Date: Mon, 30 Mar 2026 12:51:16 -0400
From: Johannes Weiner
To: Kairui Song
Cc: Andrew Morton, David Hildenbrand, Shakeel Butt, Yosry Ahmed,
    Zi Yan, "Liam R. Howlett", Usama Arif, Kiryl Shutsemau,
    Dave Chinner, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 7/7] mm: switch deferred split shrinker to list_lru
References: <20260318200352.1039011-1-hannes@cmpxchg.org>
 <20260318200352.1039011-8-hannes@cmpxchg.org>

On Fri, Mar 27, 2026 at 03:51:07PM +0800, Kairui Song wrote:
> On Thu, Mar 19, 2026 at 4:05 AM Johannes Weiner wrote:
> > @@ -4651,13 +4651,19 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >  	while (orders) {
> >  		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> >  		folio = vma_alloc_folio(gfp, order, vma, addr);
> > -		if (folio) {
> > -			if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> > -							    gfp, entry))
> > -				return folio;
> > +		if (!folio)
> > +			goto next;
> > +		if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm, gfp, entry)) {
> >  			count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> >  			folio_put(folio);
> > +			goto next;
> >  		}
> > +		if (folio_memcg_list_lru_alloc(folio, &deferred_split_lru, gfp)) {
> > +			folio_put(folio);
> > +			goto fallback;
> > +		}
>
> Hi Johannes,
>
> Haven't checked every detail yet, but one question here, might be
> trivial: would it be better to fall back to the next order instead of
> falling back to order 0 directly? Suppose this is a 2M allocation and
> a 1M fallback is allowed; releasing that folio and falling back to 1M
> would free 1M of memory, which should be enough for the list_lru
> metadata to be allocated.

I would be surprised if that mattered. If we can get a 2M folio but
fail a couple of small slab requests, there is probably such an
extreme level of concurrency and pressure on the freelists that the
fault has a good chance of failing altogether and OOMing.

And if it doesn't matter, then let's consider it from a code clarity
point of view. For folio allocation and charging, we reduce the size
to try again. But the list_lru allocation is always the same size - it
would look weird to just try again on failure. If we did so based on
the logic you lay out above, it would need a comment too...