From: Pratyush Yadav
To: Chenghao Duan
Cc: pasha.tatashin@soleen.com, rppt@kernel.org, pratyush@kernel.org,
 akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, jianghaoran@kylinos.cn
Subject: Re: [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
In-Reply-To: <20260326084727.118437-3-duanchenghao@kylinos.cn> (Chenghao Duan's message of "Thu, 26 Mar 2026 16:47:22 +0800")
References: <20260326084727.118437-1-duanchenghao@kylinos.cn>
 <20260326084727.118437-3-duanchenghao@kylinos.cn>
Date: Thu, 02 Apr 2026 11:02:04 +0000
Message-ID: <2vxz8qb5hbgz.fsf@kernel.org>

On Thu, Mar 26 2026, Chenghao Duan wrote:

> Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
> to improve performance when restoring large memfds.
>
> Currently, shmem_recalc_inode() is called for each folio during restore,
> which amounts to O(n) expensive operations. This patch collects the
> number of successfully added folios and calls shmem_recalc_inode() once
> after the loop completes, reducing the accounting cost to O(1).
>
> Additionally, fix the error path to also call shmem_recalc_inode() for
> the folios that were successfully added before the error occurred.
>
> Reviewed-by: Pasha Tatashin
> Signed-off-by: Chenghao Duan

Reviewed-by: Pratyush Yadav

BTW, can we also do the same for shmem_inode_acct_blocks(), if the call
to it can also be aggregated in the same way? You don't have to do it in
this series, but possibly as a follow up.

[...]

--
Regards,
Pratyush Yadav