public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 1/7] workqueue: bound maximum queue depth
Date: Tue, 27 Oct 2020 09:29:43 +1100	[thread overview]
Message-ID: <20201026222943.GV7391@dread.disaster.area> (raw)
In-Reply-To: <20201025044114.GA347246@magnolia>

On Sat, Oct 24, 2020 at 09:41:14PM -0700, Darrick J. Wong wrote:
> On Thu, Oct 22, 2020 at 04:15:31PM +1100, Dave Chinner wrote:
> > @@ -140,6 +164,7 @@ workqueue_add(
> >  
> >  	/* Now queue the new work structure to the work queue. */
> >  	pthread_mutex_lock(&wq->lock);
> > +restart:
> >  	if (wq->next_item == NULL) {
> >  		assert(wq->item_count == 0);
> >  		ret = -pthread_cond_signal(&wq->wakeup);
> > @@ -150,6 +175,16 @@ workqueue_add(
> >  		}
> >  		wq->next_item = wi;
> >  	} else {
> > +		/* throttle on a full queue if configured */
> > +		if (wq->max_queued && wq->item_count == wq->max_queued) {
> > +			pthread_cond_wait(&wq->queue_full, &wq->lock);
> 
> I ported xfs_scrub to use max_queued for the inode scanner, and got a
> hang here.  It uses two workqueues -- the first is an unbounded workqueue
> that receives one work item per AG in which each work item calls
> INUMBERS, creates a work item for the returned inode chunk, and throws
> it at the second workqueue.  The second workqueue is a bounded workqueue
> that calls BULKSTAT on the INUMBERS work item and then calls the
> iteration function on each bulkstat record returned.
> 
> The hang happens when the inumbers workqueue has more than one thread
> running.

IIUC, that means you have multiple producer threads? IIRC, the usage
in this patchset is single producer, so it won't hit this problem...

> *Both* threads notice the full workqueue and wait on
> queue_full.  One of the workers in the second workqueue goes to pull off
> the next work item, ends up in this if body, signals one of the sleeping
> threads, and starts calling bulkstat.
> 
> In the time it takes to wake up the sleeping thread from wq 1, the
> second workqueue pulls far enough ahead that the single thread from wq1
> never manages to fill wq2 again.  Often, the wq1 thread was sleeping so
> that it could add the last inode chunk of that AG to wq2.  We therefore
> never wake up the *other* sleeping thread from wq1, and the whole app
> stalls.
> 
> I dunno if that's a sane way to structure an inumbers/bulkstat scan, but
> it seemed reasonable to me.  I can envision two possible fixes here: (1)
> use pthread_cond_broadcast to wake everything up; or (2) always call
> pthread_cond_wait when we pull a work item off the queue.  Thoughts?

pthread_cond_broadcast() makes more sense, but I suspect there will
be other issues with multiple producers that render the throttling
ineffective. I suspect supporting multiple producers should be a
separate patchset...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 30+ messages
2020-10-22  5:15 [PATCH 0/7] repair: Phase 6 performance improvements Dave Chinner
2020-10-22  5:15 ` [PATCH 1/7] workqueue: bound maximum queue depth Dave Chinner
2020-10-22  5:54   ` Darrick J. Wong
2020-10-22  8:11     ` Dave Chinner
2020-10-25  4:41   ` Darrick J. Wong
2020-10-26 22:29     ` Dave Chinner [this message]
2020-10-26 22:40       ` Darrick J. Wong
2020-10-26 22:57         ` Dave Chinner
2020-10-22  5:15 ` [PATCH 2/7] repair: Protect bad inode list with mutex Dave Chinner
2020-10-22  5:45   ` Darrick J. Wong
2020-10-29  9:35   ` Christoph Hellwig
2020-10-22  5:15 ` [PATCH 3/7] repair: protect inode chunk tree records with a mutex Dave Chinner
2020-10-22  6:02   ` Darrick J. Wong
2020-10-22  8:15     ` Dave Chinner
2020-10-29 16:45       ` Darrick J. Wong
2020-10-22  5:15 ` [PATCH 4/7] repair: parallelise phase 6 Dave Chinner
2020-10-22  6:11   ` Darrick J. Wong
2020-10-27  5:10     ` Dave Chinner
2020-10-29 17:20       ` Darrick J. Wong
2020-10-22  5:15 ` [PATCH 5/7] repair: don't duplicate names in " Dave Chinner
2020-10-22  6:21   ` Darrick J. Wong
2020-10-22  8:23     ` Dave Chinner
2020-10-22 15:53       ` Darrick J. Wong
2020-10-29  9:39   ` Christoph Hellwig
2020-10-22  5:15 ` [PATCH 6/7] repair: convert the dir byaddr hash to a radix tree Dave Chinner
2020-10-29 16:41   ` Darrick J. Wong
2020-10-22  5:15 ` [PATCH 7/7] repair: scale duplicate name checking in phase 6 Dave Chinner
2020-10-29 16:29   ` Darrick J. Wong
  -- strict thread matches above, loose matches on Subject: below --
2021-03-19  1:33 [PATCH 0/7] repair: Phase 6 performance improvements Dave Chinner
2021-03-19  1:33 ` [PATCH 1/7] workqueue: bound maximum queue depth Dave Chinner
2021-03-19  1:45   ` Darrick J. Wong
