linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read processing
       [not found] ` <20250728-fsverity-v1-4-9e5443af0e34@kernel.org>
@ 2025-08-11 11:45   ` Christoph Hellwig
  2025-08-11 17:51     ` Tejun Heo
  0 siblings, 1 reply; 4+ messages in thread
From: Christoph Hellwig @ 2025-08-11 11:45 UTC (permalink / raw)
  To: Andrey Albershteyn
  Cc: fsverity, linux-fsdevel, linux-xfs, david, djwong, ebiggers, hch,
	Tejun Heo, Lai Jiangshan, linux-kernel

On Mon, Jul 28, 2025 at 10:30:08PM +0200, Andrey Albershteyn wrote:
> From: Andrey Albershteyn <aalbersh@redhat.com>
> 
> For XFS, fsverity's global workqueue is not really suitable due to:
> 
> 1. High priority workqueues are used within XFS to ensure that data
>    IO completion cannot stall processing of journal IO completions.
>    Hence using a WQ_HIGHPRI workqueue directly in the user data IO
>    path is a potential filesystem livelock/deadlock vector.

Do they?  I thought the whole point of WQ_HIGHPRI was that they'd
have separate rescue workers to avoid any global pool effects.

> 2. The fsverity workqueue is global - it creates a cross-filesystem
>    contention point.

How does this not affect the other file systems?

If the global workqueue is such an issue, maybe it should be addressed
in an initial series before the xfs support?
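
For concreteness, a per-superblock workqueue of the kind the patch title describes would presumably be set up with the stock workqueue API along these lines. This is only a sketch: the function name fsverity_alloc_wq and the flag choice are illustrative, not taken from the patch, and whether WQ_HIGHPRI belongs here is exactly the point under discussion.

```c
/* Sketch only: hypothetical per-sb workqueue setup, e.g. at mount time. */
#include <linux/workqueue.h>
#include <linux/fs.h>

static struct workqueue_struct *fsverity_alloc_wq(struct super_block *sb)
{
	/*
	 * WQ_MEM_RECLAIM guarantees a rescuer thread so read completions
	 * can make forward progress under memory pressure; WQ_UNBOUND
	 * avoids pinning verification work to the submitting CPU.
	 * Deliberately no WQ_HIGHPRI, per point 1 above.
	 */
	return alloc_workqueue("fsverity/%s",
			       WQ_UNBOUND | WQ_MEM_RECLAIM, 0, sb->s_id);
}
```

The matching teardown at unmount would be destroy_workqueue().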


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read processing
  2025-08-11 11:45   ` [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read processing Christoph Hellwig
@ 2025-08-11 17:51     ` Tejun Heo
  2025-08-12  7:43       ` Christoph Hellwig
  0 siblings, 1 reply; 4+ messages in thread
From: Tejun Heo @ 2025-08-11 17:51 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrey Albershteyn, fsverity, linux-fsdevel, linux-xfs, david,
	djwong, ebiggers, Lai Jiangshan, linux-kernel

Hello,

On Mon, Aug 11, 2025 at 01:45:19PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 28, 2025 at 10:30:08PM +0200, Andrey Albershteyn wrote:
> > From: Andrey Albershteyn <aalbersh@redhat.com>
> > 
> > For XFS, fsverity's global workqueue is not really suitable due to:
> > 
> > 1. High priority workqueues are used within XFS to ensure that data
> >    IO completion cannot stall processing of journal IO completions.
> >    Hence using a WQ_HIGHPRI workqueue directly in the user data IO
> >    path is a potential filesystem livelock/deadlock vector.
> 
> Do they?  I thought the whole point of WQ_HIGHPRI was that they'd
> have separate rescue workers to avoid any global pool effects.

HIGHPRI and MEM_RECLAIM are orthogonal. HIGHPRI makes the workqueue use
worker pools with high priority, so all work items would execute at MIN_NICE
(-20). Hmm... actually, the rescuer doesn't set its priority according to
the workqueue's, which seems buggy.
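
To spell out the orthogonality (a sketch against the stock workqueue API, not code from this thread): the two flags can be set independently or combined.

```c
/* Sketch: WQ_HIGHPRI selects a high-priority (MIN_NICE) worker pool for
 * work items; WQ_MEM_RECLAIM adds a dedicated rescuer thread to guarantee
 * forward progress under memory pressure. Neither flag implies the other. */
#include <linux/workqueue.h>

static struct workqueue_struct *hipri_wq, *reclaim_wq, *both_wq;

static int example_setup(void)
{
	hipri_wq   = alloc_workqueue("ex_hipri",   WQ_HIGHPRI, 0);
	reclaim_wq = alloc_workqueue("ex_reclaim", WQ_MEM_RECLAIM, 0);
	both_wq    = alloc_workqueue("ex_both",
				     WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
	return (hipri_wq && reclaim_wq && both_wq) ? 0 : -ENOMEM;
}
```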

Thanks.

-- 
tejun


* Re: [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read processing
  2025-08-11 17:51     ` Tejun Heo
@ 2025-08-12  7:43       ` Christoph Hellwig
  2025-08-12 19:52         ` Tejun Heo
  0 siblings, 1 reply; 4+ messages in thread
From: Christoph Hellwig @ 2025-08-12  7:43 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Christoph Hellwig, Andrey Albershteyn, fsverity, linux-fsdevel,
	linux-xfs, david, djwong, ebiggers, Lai Jiangshan, linux-kernel

> On Mon, Aug 11, 2025 at 01:45:19PM +0200, Christoph Hellwig wrote:
> > On Mon, Jul 28, 2025 at 10:30:08PM +0200, Andrey Albershteyn wrote:
> > > From: Andrey Albershteyn <aalbersh@redhat.com>
> > > 
> > > For XFS, fsverity's global workqueue is not really suitable due to:
> > > 
> > > 1. High priority workqueues are used within XFS to ensure that data
> > >    IO completion cannot stall processing of journal IO completions.
> > >    Hence using a WQ_HIGHPRI workqueue directly in the user data IO
> > >    path is a potential filesystem livelock/deadlock vector.
> > 
> > Do they?  I thought the whole point of WQ_HIGHPRI was that they'd
> > have separate rescue workers to avoid any global pool effects.
> 
> HIGHPRI and MEM_RECLAIM are orthogonal. HIGHPRI makes the workqueue use
> worker pools with high priority, so all work items would execute at MIN_NICE
> (-20). Hmm... actually, the rescuer doesn't set its priority according to
> the workqueue's, which seems buggy.

Andrey (or others involved with previous versions): is interference
with the log completion workqueue what you ran into?

Tejun, are you going to prepare a patch to fix the rescuer priority?



* Re: [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read processing
  2025-08-12  7:43       ` Christoph Hellwig
@ 2025-08-12 19:52         ` Tejun Heo
  0 siblings, 0 replies; 4+ messages in thread
From: Tejun Heo @ 2025-08-12 19:52 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrey Albershteyn, fsverity, linux-fsdevel, linux-xfs, david,
	djwong, ebiggers, Lai Jiangshan, linux-kernel

Hello,

On Tue, Aug 12, 2025 at 09:43:50AM +0200, Christoph Hellwig wrote:
...
> Andrey (or others involved with previous versions):  is interference
> with the log completion workqueue what you ran into?
> 
> Tejun, are you going to prepare a patch to fix the rescuer priority?

NVM, I was confused. All rescuers, regardless of the associated workqueue,
set their nice level to MIN_NICE. IIRC, the rationale was that by the time
the rescuer triggers, the queued work items have already experienced
noticeable latencies, and that rescuer invocations would be pretty rare. I'd
be surprised if rescuer behavior showed up as easily observable interference
in most cases. The system should already be thrashing quite a bit for
rescuers to be active, and whatever noise rescuer behavior might cause
should usually be drowned out by other things.

Thanks.

-- 
tejun

