From: Pratyush Yadav
To: Andrew Morton
Cc: Pratyush Yadav, Pasha Tatashin, Mike Rapoport, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2] liveupdate: luo_file: remember retrieve() status
In-Reply-To: <20260216134408.12dc6f88f7139054f9e34637@linux-foundation.org> (Andrew Morton's message of "Mon, 16 Feb 2026 13:44:08 -0800")
References: <20260216132221.987987-1-pratyush@kernel.org> <20260216134408.12dc6f88f7139054f9e34637@linux-foundation.org>
Date: Tue, 17 Feb 2026 11:38:32 +0100
Message-ID: <2vxz342zzmc7.fsf@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Feb 16 2026, Andrew Morton wrote:

> On Mon, 16 Feb 2026 14:22:19 +0100 Pratyush Yadav wrote:
>
>> From: "Pratyush Yadav (Google)"
>>
>> LUO keeps track of successful retrieve attempts on a LUO file. It does
>> so to avoid multiple retrievals of the same file. Multiple retrievals
>> cause problems because once the file is retrieved, the serialized data
>> structures are likely freed and the file is likely in a very different
>> state from what the code expects.
>>
>> The retrieved boolean in struct luo_file keeps track of this, and is
>> passed to the finish callback so it knows what work was already done
>> and what it has left to do.
>>
>> All this works well when retrieve succeeds. When it fails,
>> luo_retrieve_file() returns the error immediately, without ever
>> recording that a retrieve was attempted or what its error code was.
>> This results in an errored LIVEUPDATE_SESSION_RETRIEVE_FD ioctl to
>> userspace, but nothing prevents userspace from trying again.
>>
>> The retry is problematic for much the same reasons listed above. The
>> file is likely in a very different state than what the retrieve logic
>> normally expects, and it might even have freed some serialization data
>> structures. Attempting to access them, or free them again, is going to
>> break things.
>>
>> For example, if memfd managed to restore 8 of its 10 folios but fails
>> on the 9th, a subsequent retrieve attempt will try to call
>> kho_restore_folio() on the first folio again, and that will fail with
>> a warning since it is an invalid operation.
>>
>> Apart from the retry, finish() also breaks. Since on failure the
>> retrieved bool in luo_file is never touched, the finish() call on
>> session close will tell the file handler that retrieve was never
>> attempted, and it will try to access or free data structures that
>> might not exist, much in the same way as the retry attempt.
>>
>> There is no sane way of attempting the retrieve again. Remember the
>> error retrieve returned and directly return it on a retry. Also pass
>> this status code to finish() so it can make the right decision about
>> the work it needs to do.
>>
>> This is done by changing the bool to an integer. A value of 0 means
>> retrieve was never attempted, a positive value means it succeeded, and
>> a negative value means it failed, with the value being the error code.
>>
>> Fixes: 7c722a7f44e0 ("liveupdate: luo_file: implement file systems callbacks")
>
> Should we backport this into 6.19.1?

Yes. I keep forgetting that a Fixes tag alone isn't enough for stable
backports and that I should add Cc: stable@vger.kernel.org too. Please
add it to the patch.

-- 
Regards,
Pratyush Yadav