From: SeongJae Park
To: Vinay Banakar
Cc: SeongJae Park, Bharata B Rao, linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org, willy@infradead.org, mgorman@suse.de, Wei Xu, Greg Thelen
Subject: Re: [PATCH] mm: Optimize TLB flushes during page reclaim
Date: Thu, 23 Jan 2025 09:23:38 -0800
Message-Id: <20250123172338.53472-1-sj@kernel.org>

On Thu, 23 Jan 2025 11:11:13 -0600 Vinay Banakar wrote:
> On Wed, Jan 22, 2025 at 2:05 PM SeongJae Park wrote:
> > damon_pa_pageout() from mm/damon/paddr.c also calls shrink_folio_list() similar
> > to madvise.c, but it isn't aware of such batching behavior. Have you checked
> > that path?
>
> Thanks for catching this path. In damon_pa_pageout(),
> shrink_folio_list() processes all pages from a single NUMA node that
> were collected (filtered) from a single DAMON region (r->ar.start to
> r->ar.end). This means it could be processing anywhere from 1 page up
> to ULONG_MAX pages from a single node at once.

Thank you, Vinay.  That's the same as my understanding, except that it is
not limited to a single NUMA node.  A region can have any start and end
physical addresses, so it could cover memory of different NUMA nodes.
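
For anyone else following the thread, the collection path in question looks
roughly like the sketch below.  This is not verbatim kernel code: DAMOS
filters, quota accounting, unevictable-folio handling, and error paths are
omitted, and exact signatures differ across kernel versions.

static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
{
	LIST_HEAD(folio_list);
	unsigned long addr;

	/* walk the region's physical address range; it may span NUMA nodes */
	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
		struct folio *folio = damon_get_folio(PHYS_PFN(addr));

		if (!folio)
			continue;

		folio_clear_referenced(folio);
		folio_test_clear_young(folio);
		if (folio_isolate_lru(folio))
			list_add(&folio->lru, &folio_list);
		folio_put(folio);
	}

	/* all candidates collected from the region are handed off at once */
	return reclaim_pages(&folio_list) * PAGE_SIZE;
}

So everything collected from the region lands on one folio_list before it
is passed to the reclaim path, which is where the batching behavior of the
patch comes into play.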
> With the patch, we'll
> send a single IPI for TLB flush for the entire region, reducing IPIs
> by a factor equal to the number of pages being reclaimed by DAMON at
> once (decided by damon_reclaim_quota).

I guess the fact that the pages could belong to different NUMA nodes
doesn't make a difference here?

> My only concern here would be the overhead of maintaining the
> temporary pageout_list for batching. However, during BIO submission,
> the patch checks if the folio was reactivated, so submitting to BIO in
> bulk should be safe.
>
> Another option would be to modify shrink_folio_list() to force batch
> flushes for up to N pages (512) at a time, rather than relying on
> callers to do the batching via folio_list.

Both sound good to me :)  A rough sketch of what the second option could
look like is at the end of this mail.


Thanks,
SJ

>
> Thanks!
> Vinay
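
For illustration only, below is a rough sketch of how the second option
(flush batching inside shrink_folio_list()) might look in the per-folio
loop.  This is not the posted patch: RECLAIM_FLUSH_BATCH and
nr_batched_unmaps are made-up names, and the surrounding logic of the real
loop is elided.

#define RECLAIM_FLUSH_BATCH	512	/* illustrative cap, Vinay's suggested N */

	unsigned int nr_batched_unmaps = 0;	/* new local, declared before the loop */

	/* ... inside the per-folio loop of shrink_folio_list() ... */
	if (folio_mapped(folio)) {
		enum ttu_flags flags = TTU_BATCH_FLUSH;

		/* clear the PTEs now, but defer the TLB flush */
		try_to_unmap(folio, flags);
		if (folio_mapped(folio))
			goto activate_locked;

		/* issue one deferred flush (IPI) per batch of up to 512 folios */
		if (++nr_batched_unmaps >= RECLAIM_FLUSH_BATCH) {
			try_to_unmap_flush();
			nr_batched_unmaps = 0;
		}
	}

The point of capping the batch inside shrink_folio_list() would be to bound
how long PTEs stay cleared but unflushed, independent of how large a
folio_list callers such as damon_pa_pageout() pass in.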