Date: Thu, 20 Apr 2023 17:07:35 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Charan Teja Kalla
cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
    markhemm@googlemail.com, rientjes@google.com, surenb@google.com,
    shakeelb@google.com, fvdl@google.com, quic_pkondeti@quicinc.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V7 2/2] mm: shmem: implement POSIX_FADV_[WILL|DONT]NEED for shmem
In-Reply-To: <631e42b6dffdcc4b4b24f5be715c37f78bf903db.1676378702.git.quic_charante@quicinc.com>
Message-ID: <2d56e1dd-68b5-c99e-522f-f8dadf6ad69e@google.com>
References: <631e42b6dffdcc4b4b24f5be715c37f78bf903db.1676378702.git.quic_charante@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Tue, 14 Feb 2023, Charan Teja Kalla wrote:

> Currently fadvise(2) is supported only for the files that doesn't
> associated with noop_backing_dev_info thus for the files, like shmem,
> fadvise results into NOP. But then there is file_operations->fadvise()
> that lets the file systems to implement their own fadvise
> implementation. Use this support to implement some of the POSIX_FADV_XXX
> functionality for shmem files.
>
> This patch aims to implement POSIX_FADV_WILLNEED and POSIX_FADV_DONTNEED
> advices to shmem files which can be helpful for the clients who may want
> to manage the shmem pages of the files that are created through
> shmem_file_setup[_with_mnt](). One usecase is implemented on the
> Snapdragon SoC's running Android where the graphics client is allocating
> lot of shmem pages per process and pinning them. When this process is
> put to background, the instantaneous reclaim is performed on those shmem
> pages using the logic implemented downstream[3][4]. With this patch, the
> client can now issue the fadvise calls on the shmem files that does the
> instantaneous reclaim which can aid the use cases like mentioned above.
>
> This usecase lead to ~2% reduction in average launch latencies of the
> apps and 10% in total number of kills by the low memory killer running
> on Android.
>
> Some questions asked while reviewing this patch:
>
> Q) Can the same thing be achieved with FD mapped to user and use
> madvise?
> A) All drivers are not mapping all the shmem fd's to user space and want
> to manage them with in the kernel. Ex: shmem memory can be mapped to the
> other subsystems and they fill in the data and then give it to other
> subsystem for further processing, where, the user mapping is not at all
> required. A simple example, memory that is given for gpu subsystem
> which can be filled directly and give to display subsystem. And the
> respective drivers know well about when to keep that memory in ram or
> swap based on may be a user activity.
>
> Q) Should we add the documentation section in Manual pages?
> A) The man[1] pages for the fadvise() whatever says is also applicable
> for shmem files. so couldn't feel it correct to add specific to shmem
> files separately.
>
> Q) The proposed semantics of POSIX_FADV_DONTNEED is actually similar to
> MADV_PAGEOUT and different from MADV_DONTNEED. This is a user facing API
> and this difference will cause confusion?
> A) man pages [2] says that "POSIX_FADV_DONTNEED attempts to free cached
> pages associated with the specified region." This means on issuing this
> FADV, it is expected to free the file cache pages. And it is
> implementation defined If the dirty pages may be attempted to writeback.
> And the unwritten dirty pages will not be freed. So, FADV_DONTNEED also
> covers the semantics of MADV_PAGEOUT for file pages and there is no
> purpose of PAGEOUT for file pages.
>
> [1] https://linux.die.net/man/2/fadvise
> [2] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html
> [3] https://git.codelinaro.org/clo/la/platform/vendor/qcom/opensource/graphics-kernel/-/blob/gfx-kernel.lnx.1.0.r3-rel/kgsl_reclaim.c#L289
> [4] https://android.googlesource.com/kernel/common/+/refs/heads/android12-5.10/mm/shmem.c#4310
>
> Signed-off-by: Charan Teja Kalla

I'm sorry, but no, this is not yet ready for primetime.  I came here
expecting to be able just to add a patch on top with small fixes, but
see today that it needs more than that, and my time has run out.

Though if Andrew is keen to go ahead with it in 6.4, and add fixes on
top while it's in rc, that will be okay: except for one small bad bug,
which must be fixed immediately - "luckily" nobody appears to be using
or testing this since v5, but it cannot go further as is.

Willneed is probably fine, but dontneed is not.
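For reference, the interface being discussed can be exercised from
userspace on a shmem-backed fd.  A minimal sketch, not part of the
patch: memfd_create() (tmpfs-backed, glibc >= 2.27) stands in for a
kernel-side shmem_file_setup() user, and the helper names are
hypothetical:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create a shmem-backed fd and dirty npages pages, so there is shmem
 * page cache for fadvise to act on. */
static int make_populated_memfd(int npages)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = memfd_create("gfx-buffer", 0);	/* name is arbitrary */

	if (fd < 0 || ftruncate(fd, (off_t)npages * page) != 0)
		return -1;
	char *buf = mmap(NULL, (size_t)npages * page,
			 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED)
		return -1;
	memset(buf, 0xab, (size_t)npages * page);
	munmap(buf, (size_t)npages * page);
	return fd;
}

/* Advise the whole file: len == 0 means "through end of file".
 * Without the patch this is a NOP on shmem (noop_backing_dev_info);
 * with it, the covered pages are pushed toward swap.  The call
 * returns 0 either way. */
static int drop_whole_file(int fd)
{
	return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}
```

This is the shape of call the downstream graphics driver would issue
when its process goes to background.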
> ---
>  mm/shmem.c | 116 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 116 insertions(+)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 448f393..1af8525 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -40,6 +40,9 @@
>  #include
>  #include
>  #include
> +#include
> +#include
> +#include
>  #include "swap.h"
>
>  static struct vfsmount *shm_mnt;
> @@ -2344,6 +2347,118 @@ static void shmem_set_inode_flags(struct inode *inode, unsigned int fsflags)
>  #define shmem_initxattrs NULL
>  #endif
>
> +static void shmem_isolate_pages_range(struct address_space *mapping, loff_t start,
> +		loff_t end, struct list_head *list)

loff_t?  They are pgoff_t.

> +{
> +	XA_STATE(xas, &mapping->i_pages, start);
> +	struct folio *folio;
> +
> +	rcu_read_lock();
> +	xas_for_each(&xas, folio, end) {
> +		if (xas_retry(&xas, folio))
> +			continue;
> +		if (xa_is_value(folio))
> +			continue;
> +
> +		if (!folio_try_get(folio))
> +			continue;
> +		if (folio_test_unevictable(folio) || folio_mapped(folio) ||
> +				folio_isolate_lru(folio)) {

There is the one small bad bug.  That should say !folio_isolate_lru(folio).

In v5, it was isolate_lru_page(page), because isolate_lru_page()
returned 0 for success or -EBUSY for unavailable; whereas
folio_isolate_lru(folio) is a boolean, returning true if it
successfully removed folio from LRU.

The effect of that bug is that in v6 and v7, it has skipped all the
folios it was expected to be reclaiming; except when one of them
happened to be off LRU for other reasons (being reclaimed elsewhere,
being migrated, whatever) - and precisely those folios which were not
safe to touch, which have often been transferred to a private worklist,
are the ones which the code below goes on to play with - corrupting
either or both lists.

(I haven't tried to reproduce that in practice, just saw it in the
code, and verified with a count that no pages were reclaimed.)
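The inversion is easy to see in a toy userspace model of the two return
conventions (a sketch, not kernel code; "on_lru" stands for a folio
that is still on its LRU list and so can be isolated):

```c
#include <stdbool.h>

/* Old convention: 0 on success, -EBUSY when the page can't be isolated. */
static int isolate_lru_page_model(bool on_lru)
{
	return on_lru ? 0 : -16;	/* -EBUSY */
}

/* New convention: true on success, false when it can't be isolated. */
static bool folio_isolate_lru_model(bool on_lru)
{
	return on_lru;
}

/* v5 skip test: skip the page when isolation failed. */
static bool v5_skips(bool on_lru)
{
	return isolate_lru_page_model(on_lru) != 0;
}

/* v6/v7 as posted: "|| folio_isolate_lru(folio)" skips on *success*. */
static bool v7_skips(bool on_lru)
{
	return folio_isolate_lru_model(on_lru);
}

/* Fixed: "|| !folio_isolate_lru(folio)" matches the v5 behaviour. */
static bool fixed_skips(bool on_lru)
{
	return !folio_isolate_lru_model(on_lru);
}
```

So the posted condition skips exactly the reclaimable folios, and keeps
exactly the unsafe off-LRU ones.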
> +			folio_put(folio);
> +			continue;
> +		}
> +		folio_put(folio);
> +
> +		/*
> +		 * Prepare the folios to be passed to reclaim_pages().
> +		 * VM can't reclaim a folio unless young bit is
> +		 * cleared in its flags.
> +		 */
> +		folio_clear_referenced(folio);
> +		folio_test_clear_young(folio);
> +		list_add(&folio->lru, list);
> +		if (need_resched()) {
> +			xas_pause(&xas);
> +			cond_resched_rcu();
> +		}
> +	}
> +	rcu_read_unlock();
> +}
> +
> +static int shmem_fadvise_dontneed(struct address_space *mapping, loff_t start,
> +		loff_t end)

loff_t?  They are pgoff_t.

And why return an int which is always 0?

> +{
> +	LIST_HEAD(folio_list);
> +
> +	if (!total_swap_pages || mapping_unevictable(mapping))
> +		return 0;
> +
> +	lru_add_drain();
> +	shmem_isolate_pages_range(mapping, start, end, &folio_list);
> +	reclaim_pages(&folio_list);
> +
> +	return 0;
> +}
> +
> +static int shmem_fadvise_willneed(struct address_space *mapping,
> +		pgoff_t start, pgoff_t long end)

pgoff_t long?  That's a new type to me!

Again, why return an int always 0?

> +{
> +	struct folio *folio;
> +	pgoff_t index;
> +
> +	xa_for_each_range(&mapping->i_pages, index, folio, start, end) {
> +		if (!xa_is_value(folio))
> +			continue;
> +		folio = shmem_read_folio(mapping, index);
> +		if (!IS_ERR(folio))
> +			folio_put(folio);
> +	}
> +
> +	return 0;
> +}
> +
> +static int shmem_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
> +{
> +	loff_t endbyte;
> +	pgoff_t start_index;
> +	pgoff_t end_index;
> +	struct address_space *mapping;
> +	struct inode *inode = file_inode(file);
> +	int ret = 0;
> +
> +	if (S_ISFIFO(inode->i_mode))
> +		return -ESPIPE;
> +
> +	mapping = file->f_mapping;
> +	if (!mapping || len < 0 || !shmem_mapping(mapping))
> +		return -EINVAL;
> +
> +	endbyte = fadvise_calc_endbyte(offset, len);
> +
> +	start_index = offset >> PAGE_SHIFT;
> +	end_index = endbyte >> PAGE_SHIFT;
> +	switch (advice) {
> +	case POSIX_FADV_DONTNEED:

This is where I ran out of time.
I'm afraid all the focus on fadvise_calc_endbyte() has distracted you
from looking at the DONTNEED in mm/fadvise.c: where there are detailed
comments on why and how it then narrows the DONTNEED range.  And aside
from needing to duplicate that here for shmem (or put it into another
or combined helper), it implies to me that shmem_isolate_pages_range()
needs to do a similar narrowing, when it finds that the range overlaps
part of a large folio.

Something that has crossed my mind as a worry, but I've not had time to
look further into (maybe it's no concern at all) is the question of this
syscall temporarily isolating a very large number of folios, whether
they need to be (or perhaps already are) counted in NR_ISOLATED_ANON,
whether too many isolated needs to be limited.

> +		ret = shmem_fadvise_dontneed(mapping, start_index, end_index);
> +		break;
> +	case POSIX_FADV_WILLNEED:
> +		ret = shmem_fadvise_willneed(mapping, start_index, end_index);
> +		break;
> +	case POSIX_FADV_NORMAL:
> +	case POSIX_FADV_RANDOM:
> +	case POSIX_FADV_SEQUENTIAL:
> +	case POSIX_FADV_NOREUSE:
> +		/*
> +		 * No bad return value, but ignore advice.
> +		 */
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +
>  static struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block *sb,
>  				     struct inode *dir, umode_t mode, dev_t dev,
>  				     unsigned long flags)
> @@ -3942,6 +4057,7 @@ static const struct file_operations shmem_file_operations = {
>  	.splice_write	= iter_file_splice_write,
>  	.fallocate	= shmem_fallocate,
>  #endif
> +	.fadvise	= shmem_fadvise,

I'd say posix_fadvise() is an operation on an fd, and shmem_fadvise()
and all its helpers should be under CONFIG_TMPFS (but oftentimes I do
think CONFIG_TMPFS and CONFIG_SHMEM are more trouble than they are
worth).

Hugh

>  };
>
>  static const struct inode_operations shmem_inode_operations = {
> --
> 2.7.4
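P.S. On the range-narrowing point: the index arithmetic that
mm/fadvise.c's DONTNEED case performs can be sketched in userspace
(a simplified model with a fixed 4K page, omitting the i_size-1
special case and any large-folio handling):

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1L << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Mirrors fadvise_calc_endbyte(): len == 0 means "to end of file",
 * modelled here as -1. */
static long long calc_endbyte(long long offset, long long len)
{
	unsigned long long endbyte = (unsigned long long)offset +
				     (unsigned long long)len;

	if (!len || endbyte < (unsigned long long)offset)
		return -1;
	return (long long)endbyte - 1;
}

/* DONTNEED narrows inward: a partially covered first page is kept
 * (round the start up), and a partially covered last page is kept
 * (drop it from the end), so only fully covered pages are freed. */
static void dontneed_range(long long offset, long long endbyte,
			   unsigned long *start, unsigned long *end)
{
	*start = (offset + PAGE_SIZE - 1) >> PAGE_SHIFT;  /* round up */
	*end = endbyte >> PAGE_SHIFT;
	if ((endbyte & ~PAGE_MASK) != ~PAGE_MASK) {	/* endbyte mid-page */
		if (*end)
			(*end)--;			/* keep partial last page */
	}
}
```

Whereas the patch rounds both start_index and end_index down, which
frees a partially covered first page and keeps a fully covered last
one - the opposite of what the comments in mm/fadvise.c ask for.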