From: Luis Henriques <luis@igalia.com>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
Bernd Schubert <bernd@bsbernd.com>,
Teng Qin <tqin@jumptrading.com>,
Matt Harvey <mharvey@jumptrading.com>
Subject: Re: [RFC PATCH v2] fuse: fix race in fuse_notify_store()
Date: Mon, 24 Feb 2025 14:30:17 +0000 [thread overview]
Message-ID: <87tt8j4dqe.fsf@igalia.com> (raw)
In-Reply-To: <CAJfpegsrGO25sJe1GQBVe=Ea5jhkpr7WjpQOHKxkL=gJTk+y8g@mail.gmail.com> (Miklos Szeredi's message of "Mon, 24 Feb 2025 14:36:17 +0100")
On Mon, Feb 24 2025, Miklos Szeredi wrote:
> On Thu, 30 Jan 2025 at 11:16, Luis Henriques <luis@igalia.com> wrote:
>>
>> Userspace filesystems can push data for a specific inode without it being
>> explicitly requested. This can be accomplished by using NOTIFY_STORE.
>> However, this may race against another process performing different
>> operations on the same inode.
>>
>> If, for example, there is a process reading from it, it may happen that it
>> will block waiting for data to be available (locking the folio), while the
>> FUSE server will also block trying to lock the same folio to update it with
>> the inode data.
>>
>> The easiest solution, as suggested by Miklos, is to allow the userspace
>> filesystem to skip locked folios.
>
> Not sure.
>
> The easiest solution is to make the server perform the two operations
> independently. I.e. never trigger a notification from a request.
>
> This is true of other notifications, e.g. doing FUSE_NOTIFY_DELETE
> during e.g. FUSE_RMDIR will deadlock on i_mutex.

Hmmm... OK, the NOTIFY_DELETE and NOTIFY_INVAL_ENTRY deadlocks are
documented (in libfuse, at least), so maybe this one could be added to the
list of notifications that can deadlock. However, IMHO, it would be great
if it could be fixed instead.

> Or am I misunderstanding the problem?

I believe the initial report[1] describes a specific use-case where the
deadlock can happen even when the server performs the two operations
independently. For example:

- An application reads 4K of data at offset 0
- The server gets a read request. It performs the read, and gets more
  data than requested (say, 4M)
- It caches this data in userspace and replies to the VFS with 4K of data
- The server does a notify_store with the remainder of the data
- In the meantime, the userspace application reads another 4K at offset 4K

The last two operations can race, and the server may deadlock if the
application has already locked the page that the data will be read into.
Does it make sense?
[1] https://lore.kernel.org/CH2PR14MB41040692ABC50334F500789ED6C89@CH2PR14MB4104.namprd14.prod.outlook.com
Cheers,
--
Luís
Thread overview: 6+ messages
2025-01-30 10:16 [RFC PATCH v2] fuse: fix race in fuse_notify_store() Luis Henriques
2025-02-21 17:40 ` Luis Henriques
2025-02-24 13:36 ` Miklos Szeredi
2025-02-24 14:30 ` Luis Henriques [this message]
2025-02-24 14:39 ` Miklos Szeredi
2025-02-25 10:37 ` Luis Henriques