linux-fsdevel.vger.kernel.org archive mirror
* [GIT PULL] udf and quota fixes for 6.18-rc1
@ 2025-10-01 11:29 Jan Kara
  2025-10-03 20:43 ` Linus Torvalds
  2025-10-03 21:31 ` pr-tracker-bot
  0 siblings, 2 replies; 6+ messages in thread
From: Jan Kara @ 2025-10-01 11:29 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-fsdevel

  Hello Linus,

  could you please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs.git fs_for_v6.18-rc1

to get a UDF fix and a quota fix.

Top of the tree is 3bd5e45c2ce3. The full shortlog is:

Larshin Sergey (1):
      fs: udf: fix OOB read in lengthAllocDescs handling

Shashank A P (1):
      fs: quota: create dedicated workqueue for quota_release_work

The diffstat is

 fs/quota/dquot.c | 10 +++++++++-
 fs/udf/inode.c   |  3 +++
 2 files changed, 12 insertions(+), 1 deletion(-)
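
For reference, the quota change essentially boils down to giving the deferred
release work its own reclaim-safe queue. A minimal sketch of the pattern
(identifier names are illustrative, the actual patch may differ):

#include <linux/workqueue.h>

static struct workqueue_struct *quota_release_wq;

static void quota_release_workfn(struct work_struct *work)
{
        /* walk the list of dquots queued for release and free them */
}
static DECLARE_WORK(quota_release_work, quota_release_workfn);

static int __init quota_release_wq_init(void)
{
        /*
         * WQ_MEM_RECLAIM gives the queue its own rescuer thread, so
         * queued items can still make progress when new kworkers
         * cannot be created under memory pressure.
         */
        quota_release_wq = alloc_workqueue("quota_release", WQ_MEM_RECLAIM, 0);
        return quota_release_wq ? 0 : -ENOMEM;
}

/*
 * Callers then switch from queue_work(system_wq, &quota_release_work)
 * to queue_work(quota_release_wq, &quota_release_work).
 */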

							Thanks
								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [GIT PULL] udf and quota fixes for 6.18-rc1
  2025-10-01 11:29 [GIT PULL] udf and quota fixes for 6.18-rc1 Jan Kara
@ 2025-10-03 20:43 ` Linus Torvalds
  2025-10-03 21:48   ` Tejun Heo
  2025-10-03 21:31 ` pr-tracker-bot
  1 sibling, 1 reply; 6+ messages in thread
From: Linus Torvalds @ 2025-10-03 20:43 UTC (permalink / raw)
  To: Jan Kara, Tejun Heo; +Cc: linux-fsdevel

On Wed, 1 Oct 2025 at 04:29, Jan Kara <jack@suse.cz> wrote:
>
> Shashank A P (1):
>       fs: quota: create dedicated workqueue for quota_release_work

I've pulled this, but I do wonder why we have so many of these
occasional workqueues that seem to make so little sense.

Could we perhaps just add a system workqueue for reclaim? Instead of
having tons of individual workqueues that all exist mainly just for
that single reason (and I think they all end up also getting a
"rescuer" worker too?)

Tejun, comments?

              Linus


* Re: [GIT PULL] udf and quota fixes for 6.18-rc1
  2025-10-01 11:29 [GIT PULL] udf and quota fixes for 6.18-rc1 Jan Kara
  2025-10-03 20:43 ` Linus Torvalds
@ 2025-10-03 21:31 ` pr-tracker-bot
  1 sibling, 0 replies; 6+ messages in thread
From: pr-tracker-bot @ 2025-10-03 21:31 UTC (permalink / raw)
  To: Jan Kara; +Cc: Linus Torvalds, linux-fsdevel

The pull request you sent on Wed, 1 Oct 2025 13:29:14 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs.git fs_for_v6.18-rc1

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/a4eb9356480fa47618e597a43284c52ac6023f28

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


* Re: [GIT PULL] udf and quota fixes for 6.18-rc1
  2025-10-03 20:43 ` Linus Torvalds
@ 2025-10-03 21:48   ` Tejun Heo
  2025-10-03 22:08     ` Linus Torvalds
  0 siblings, 1 reply; 6+ messages in thread
From: Tejun Heo @ 2025-10-03 21:48 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Jan Kara, linux-fsdevel

Hello,

On Fri, Oct 03, 2025 at 01:43:20PM -0700, Linus Torvalds wrote:
> On Wed, 1 Oct 2025 at 04:29, Jan Kara <jack@suse.cz> wrote:
> >
> > Shashank A P (1):
> >       fs: quota: create dedicated workqueue for quota_release_work
> 
> I've pulled this, but I do wonder why we have so many of these
> occasional workqueues that seem to make so little sense.
> 
> Could we perhaps just add a system workqueue for reclaim? Instead of
> having tons of individual workqueues that all exist mainly just for
> that single reason (and I think they all end up also getting a
> "rescuer" worker too?)

Usually the problem is that a WQ_MEM_RECLAIM workqueue can only guarantee
forward progress of a single work item at any moment. If there are two work
items where one depends on the other to make forward progress, putting those
two on the same workqueue will lead to deadlocks under memory pressure.

So, two subsystems can share a WQ_MEM_RECLAIM workqueue iff the two are
guaranteed to not stack. If e.g. ext4 uses quota and if an ext4 work item
can wait for quota_release_work() to finish, then putting them on the same
WQ_MEM_RECLAIM workqueue will lead to a deadlock.
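
To illustrate the stacking problem (all names below are made up):

static struct workqueue_struct *shared_reclaim_wq;      /* WQ_MEM_RECLAIM */

static void quota_release_fn(struct work_struct *work)
{
        /* releases dquots; may block on filesystem locks */
}
static DECLARE_WORK(quota_release, quota_release_fn);

static void ext4_writeback_fn(struct work_struct *work)
{
        /*
         * Waits on the quota item.  Under memory pressure the queue's
         * single rescuer may be the only worker left; if it is busy
         * running this item, the quota item it flushes can never run.
         */
        flush_work(&quota_release);
}
static DECLARE_WORK(ext4_writeback, ext4_writeback_fn);

/* both queued on shared_reclaim_wq -> deadlock under memory pressure */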

One thing we can improve is how these workqueues are initialized. Maybe we
can lazy-init the rescuer so that we don't end up with a bunch of rescuer
threads that are never used. A lot of subsystems end up creating
WQ_MEM_RECLAIM workqueues whether they actually end up getting used or not.
It'd be nice if we could just tie rescuer creation to the first work item
being issued, but we might already need the forward progress guarantee at
that point. Don't see a nice way out yet. Will think more about it.

Thanks.

-- 
tejun


* Re: [GIT PULL] udf and quota fixes for 6.18-rc1
  2025-10-03 21:48   ` Tejun Heo
@ 2025-10-03 22:08     ` Linus Torvalds
  2025-10-06 11:11       ` Jan Kara
  0 siblings, 1 reply; 6+ messages in thread
From: Linus Torvalds @ 2025-10-03 22:08 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Jan Kara, linux-fsdevel

On Fri, 3 Oct 2025 at 14:48, Tejun Heo <tj@kernel.org> wrote:
>
> So, two subsystems can share a WQ_MEM_RECLAIM workqueue iff the two are
> guaranteed to not stack. If e.g. ext4 uses quota and if an ext4 work item
> can wait for quota_release_work() to finish, then putting them on the same
> WQ_MEM_RECLAIM workqueue will lead to a deadlock.

Yes. However, in my experience - and this may be limited and buggy, so
take that with a large pinch of salt - a number of these things are
not the kind that waits for work, but more of a "fire off and forget".

So for example, the new quota user obviously ends up doing quota
writebacks (->write_dquot), and in the process may need to get
filesystem locks etc.

So it will certainly block and wait for other things.

And yes, it's not *entirely* a "fire-off and forget" situation: people
will obviously wait for it occasionally.

But they'll wait for it in things like the 'sync()' path, which had
better not hold any locks anyway.
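
I.e. the shape is roughly this (names purely illustrative):

static struct workqueue_struct *quota_release_wq;        /* WQ_MEM_RECLAIM */

static void quota_release_workfn(struct work_struct *work)
{
        /* write back and release dquots */
}
static DECLARE_WORK(quota_release_work, quota_release_workfn);

/* the common path just kicks the work and returns */
static void quota_release_kick(void)
{
        queue_work(quota_release_wq, &quota_release_work);
}

/* only the sync()-style path waits, and it holds no locks while doing so */
static void quota_release_wait(void)
{
        flush_work(&quota_release_work);
}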

So the quota case was perfectly happy using the system wq - except for
the whole "WQ_MEM_RECLAIM" issue.

And I *think* that's the common case.

This is when Jan might pipe up and tell me I'm very wrong and entirely
misread the whole issue.

Jan?

           Linus


* Re: [GIT PULL] udf and quota fixes for 6.18-rc1
  2025-10-03 22:08     ` Linus Torvalds
@ 2025-10-06 11:11       ` Jan Kara
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Kara @ 2025-10-06 11:11 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Tejun Heo, Jan Kara, linux-fsdevel

On Fri 03-10-25 15:08:14, Linus Torvalds wrote:
> On Fri, 3 Oct 2025 at 14:48, Tejun Heo <tj@kernel.org> wrote:
> >
> > So, two subsystems can share a WQ_MEM_RECLAIM workqueue iff the two are
> > guaranteed to not stack. If e.g. ext4 uses quota and if an ext4 work item
> > can wait for quota_release_work() to finish, then putting them on the same
> > WQ_MEM_RECLAIM workqueue will lead to a deadlock.
> 
> Yes. However, in my experience - and this may be limited and buggy, so
> take that with a large pinch of salt - a number of these things are
> not the kind that waits for work, but more of a "fire off and forget".
> 
> So for example, the new quota user obviously ends up doing quota
> writebacks (->write_dquot), and in the process may need to get
> filesystem locks etc.
> 
> So it will certainly block and wait for other things.
> 
> And yes, it's not *entirely* a "fire-off and forget" situation: people
> will obviously wait for it occasionally.
> 
> But they'll wait for it in things like the 'sync()' path, which had
> better not hold any locks anyway.
> 
> So the quota case was perfectly happy using the system wq - except for
> the whole "WQ_MEM_RECLAIM" issue.

Generally, I agree people are not waiting for dquot freeing. But there are
some corner cases where they can - e.g. if freeing of a dquot races with
someone trying to grab a new reference to it through dqget(). Then dqget()
has to wait for the freeing to complete. It is in these corner cases that
syzbot usually manages to find some unexpected dependencies, like in the
case this patch was fixing.
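
Schematically the race looks like this (not the actual dquot.c code, just
the shape of it):

struct dquot *dqget_sketch(struct super_block *sb, struct kqid qid)
{
        struct dquot *dquot = dquot_hash_lookup(sb, qid);    /* made-up helper */

        /*
         * The last reference to this dquot was just dropped and it sits
         * on the deferred-release list, waiting for quota_release_work
         * to free it.  A new user must wait for that work to finish
         * before taking a fresh reference, so dqget() ends up depending
         * on the release workqueue making forward progress.
         */
        if (dquot && dquot_is_releasing(dquot))               /* made-up helper */
                wait_for_dquot_release(dquot);                /* made-up helper */

        return dquot;
}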

If we take the example this patch is fixing: writeback work ends up
depending on quota release work, so to guarantee forward progress we need
separate workqueues for them. Quota release work may in turn end up waiting
for some work in the filesystem to which we are writing back the quota
information. So again, if quota release work were using the "generic"
reclaim workqueue, none of those filesystem work items could use it. So I
tend to agree with Tejun that it seems somewhat fragile to have a generic
reclaim workqueue if we want to absolutely guarantee forward progress
without having to allocate a new worker.
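
To make the chain explicit (queue names are made up): each level that can
wait on the next needs its own WQ_MEM_RECLAIM queue, so that its rescuer is
never the worker the waited-for item itself needs:

static struct workqueue_struct *writeback_wq;       /* runs writeback work            */
static struct workqueue_struct *quota_release_wq;   /* runs quota release work        */
static struct workqueue_struct *fs_wq;              /* runs the filesystem's own work */

static int __init reclaim_wqs_init(void)
{
        writeback_wq     = alloc_workqueue("writeback",     WQ_MEM_RECLAIM, 0);
        quota_release_wq = alloc_workqueue("quota_release", WQ_MEM_RECLAIM, 0);
        fs_wq            = alloc_workqueue("fs_work",       WQ_MEM_RECLAIM, 0);
        if (!writeback_wq || !quota_release_wq || !fs_wq)
                return -ENOMEM;
        return 0;
}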

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


end of thread

Thread overview: 6+ messages
2025-10-01 11:29 [GIT PULL] udf and quota fixes for 6.18-rc1 Jan Kara
2025-10-03 20:43 ` Linus Torvalds
2025-10-03 21:48   ` Tejun Heo
2025-10-03 22:08     ` Linus Torvalds
2025-10-06 11:11       ` Jan Kara
2025-10-03 21:31 ` pr-tracker-bot
