From: Shakeel Butt
To: Barry Song
Cc: "JP Kobryn (Meta)", linux-mm@kvack.org, akpm@linux-foundation.org,
	vbabka@kernel.org, mhocko@suse.com, willy@infradead.org,
	hannes@cmpxchg.org, riel@surriel.com, chrisl@kernel.org,
	kasong@tencent.com, shikemeng@huaweicloud.com, nphamcs@gmail.com,
	bhe@redhat.com, youngjun.park@lge.com, qi.zheng@linux.dev,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH] mm/lruvec: preemptively free dead folios during lru_add drain
Date: Thu, 23 Apr 2026 16:46:00 -0700
References: <20260423164307.29805-1-jp.kobryn@linux.dev>

On Fri, Apr 24, 2026 at 07:22:30AM +0800, Barry Song wrote:
> On Fri, Apr 24, 2026 at 12:43 AM JP Kobryn (Meta) wrote:
> >
> > Of all observable lruvec lock contention in our fleet, we find that
> > ~24% occurs when dead folios are present in lru_add batches at drain
> > time.
> > This is wasteful in the sense that the folio is added to the LRU just
> > to be immediately removed via folios_put_refs(), incurring two
> > unnecessary lock acquisitions.
> >
> > Eliminate this overhead by preemptively cleaning up dead folios before
> > they make it into the LRU. Use folio_ref_freeze() to filter folios
> > whose only remaining refcount is the batch ref. When dead folios are
> > found, move them off the add batch and onto a temporary batch to be
> > freed.
> >
> > During A/B testing on one of our prod Instagram workloads
> > (high-frequency short-lived requests), the patch intercepted almost
> > all dead folios before they entered the LRU. Data collected using the
> > mm_lru_insertion tracepoint shows the effectiveness of the patch:
> >
> > Per-host LRU add averages at 95% CPU load
> > (60 hosts each side, 3 x 60s intervals)
> >
> >              dead folios/min   total folios/min    dead %
> > unpatched:       1,297,785         19,341,986     6.7097%
> > patched:                14         19,039,996     0.0001%
> >
> > Within this workload, we save ~2.6M lock acquisitions per minute per
> > host as a result.
> >
> > System-wide memory stats also improved on the patched side at 95% CPU
> > load:
> > - direct reclaim scanning reduced 7%
> > - allocation stalls reduced 5.2%
> > - compaction stalls reduced 12.3%
> > - page frees reduced 4.9%
> >
> > No regressions were observed in requests served per second or in
> > request tail latency (p99). Both metrics showed directional
> > improvement at higher CPU utilization (comparing 85% to 95%).
> >
> > Signed-off-by: JP Kobryn (Meta)
>
> Hi JP,
> I'm seeing a large number of "Bad page" bugs.
> Not sure if it's related, but reverting this patch
> seems to fix the issue.
>
> [ 2869.365978] BUG: Bad page state in process uname pfn:3a5417
> [ 2869.365981] page: refcount:0 mapcount:0 mapping:0000000000000000
> index:0x724884c20 pfn:0x3a5417
> [ 2869.365983] flags:
> 0x17ffffc0020908(uptodate|active|owner_2|swapbacked|node=0|zone=2|lastcpupid=0x1fffff)

Hi Barry, are you using MGLRU? It seems like MGLRU sets the active flag
in folio_add_lru().

JP, we need to clear the active flag, but let's check what else can be
set before folio_add_lru().
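
For illustration, here is a minimal sketch of the freeze-and-filter step
the commit message above describes. The function name filter_dead_folios
is invented, and the sketch assumes each folio on the lru_add batch is
pinned by exactly one batch-owned reference; the actual patch may
structure this differently inside the drain path in mm/swap.c:

static void filter_dead_folios(struct folio_batch *fbatch)
{
	struct folio_batch free_folios;
	unsigned int i, keep = 0;

	folio_batch_init(&free_folios);

	for (i = 0; i < folio_batch_count(fbatch); i++) {
		struct folio *folio = fbatch->folios[i];

		/*
		 * folio_ref_freeze() succeeds only when the refcount
		 * equals the expected value, so freezing at 1 (the
		 * batch's own ref) proves no one else holds the folio.
		 */
		if (folio_ref_freeze(folio, 1)) {
			/* Restore the ref so folios_put() can drop it. */
			folio_ref_unfreeze(folio, 1);
			folio_batch_add(&free_folios, folio);
			continue;
		}
		/* Still referenced elsewhere: keep it on the add batch. */
		fbatch->folios[keep++] = folio;
	}
	fbatch->nr = keep;

	if (folio_batch_count(&free_folios))
		folios_put(&free_folios);
}

The appeal of folio_ref_freeze() here is that the check and the claim
are a single atomic compare-and-exchange, so a reference taken
concurrently cannot slip in between testing the count and moving the
folio to the free batch.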
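And on Shakeel's closing point: under MGLRU, folio_add_lru() can set
PG_active before the folio is batched, and PG_active is among the flags
the free path treats as bad-page state (note the "active" bit in Barry's
flags dump). A hypothetical cleanup of the kind being suggested, applied
to each dead folio before it is freed, might look like:

static void prep_dead_folio(struct folio *folio)
{
	/*
	 * MGLRU's folio_add_lru() may have set PG_active; clear it so
	 * the bad-page check in the free path does not trip. Per the
	 * reply above, other flags set before folio_add_lru() may need
	 * the same treatment.
	 */
	if (folio_test_active(folio))
		folio_clear_active(folio);
}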