Date: Fri, 6 Mar 2026 14:58:27 -0600
From: Chris Arges
To: Kiryl Shutsemau
Cc: Matthew Wilcox, akpm@linux-foundation.org, william.kucharski@oracle.com,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Subject: Re: [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups
References: <20260305183438.1062312-1-carges@cloudflare.com>
	<20260305183438.1062312-2-carges@cloudflare.com>

On 2026-03-06 20:21:59, Kiryl Shutsemau wrote:
> On Fri, Mar 06, 2026 at 02:11:22PM -0600, Chris Arges wrote:
> > On 2026-03-06 16:28:19, Matthew Wilcox wrote:
> > > On Fri, Mar 06, 2026 at 02:13:26PM +0000, Kiryl Shutsemau wrote:
> > > > On Thu, Mar 05, 2026 at 07:24:38PM +0000, Matthew Wilcox wrote:
> > > > > folio_split() needs to be sure that it's the only one holding a reference
> > > > > to the folio. To that end, it calculates the expected refcount of the
> > > > > folio, and freezes it (sets the refcount to 0 if the refcount is the
> > > > > expected value). Once filemap_get_entry() has incremented the refcount,
> > > > > freezing will fail.
> > > > >
> > > > > But of course, we can race. filemap_get_entry() can load a folio first,
> > > > > the entire folio_split can happen, then it calls folio_try_get() and
> > > > > succeeds, but it no longer covers the index we were looking for. That's
> > > > > what the xas_reload() is trying to prevent -- if the index is for a
> > > > > folio which has changed, then the xas_reload() should come back with a
> > > > > different folio and we goto repeat.
> > > > >
> > > > > So how did we get through this with a reference to the wrong folio?
> > > >
> > > > What would xas_reload() return if we raced with split and index pointed
> > > > to a tail page before the split?
> > > >
> > > > Wouldn't it return the folio that was a head and check will pass?
> > >
> > > It's not supposed to return the head in this case. But, check the code:
> > >
> > > 	if (!node)
> > > 		return xa_head(xas->xa);
> > > 	if (IS_ENABLED(CONFIG_XARRAY_MULTI)) {
> > > 		offset = (xas->xa_index >> node->shift) & XA_CHUNK_MASK;
> > > 		entry = xa_entry(xas->xa, node, offset);
> > > 		if (!xa_is_sibling(entry))
> > > 			return entry;
> > > 		offset = xa_to_sibling(entry);
> > > 	}
> > > 	return xa_entry(xas->xa, node, offset);
> > >
> > > (obviously CONFIG_XARRAY_MULTI is enabled)
> >
> > Yes, we have this CONFIG enabled.
> >
> > Also FWIW, happy to run some additional experiments or more debugging. We
> > _can_ reproduce this, as a machine hits this about every day on a sample of
> > ~128 machines. We also do get crashdumps, so we can poke around there as
> > needed.
> >
> > I was going to deploy this patch onto a subset of machines, but reading
> > through this thread I'm a bit concerned that if a retry doesn't actually fix
> > the problem, then we will just loop on this condition and hang.
>
> It would be useful to know whether the condition is persistent or whether a
> retry "fixes" the problem.

Fair enough. I suppose it's either crashing or locking up. Will deploy early
next week and see what happens.

--chris