* [PATCH] Btrfs: fix num_start_workers count if we fail to make an alloc
From: Josef Bacik @ 2011-11-18 19:38 UTC
  To: linux-btrfs, viro

Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
basically), we could leak our count for num_start_workers, and so we'd think we
had more workers than we actually do.  This could cause us to shrink workers
when we shouldn't or not start workers when we should.  So check the return
value, and if we failed, fix num_start_workers and fall back.  Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
---
 fs/btrfs/async-thread.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
index 7ec1409..09ef1b0 100644
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -568,6 +568,7 @@ static struct btrfs_worker_thread *find_worker(struct btrfs_workers *workers)
 	struct btrfs_worker_thread *worker;
 	unsigned long flags;
 	struct list_head *fallback;
+	int ret;
 
 again:
 	spin_lock_irqsave(&workers->lock, flags);
@@ -584,7 +585,13 @@ again:
 			workers->num_workers_starting++;
 			spin_unlock_irqrestore(&workers->lock, flags);
 			/* we're below the limit, start another worker */
-			__btrfs_start_workers(workers, 1);
+			ret = __btrfs_start_workers(workers, 1);
+			if (ret) {
+				spin_lock_irqsave(&workers->lock, flags);
+				workers->num_workers_starting--;
+				spin_unlock_irqrestore(&workers->lock, flags);
+				goto fallback;
+			}
 			goto again;
 		}
 	}
-- 
1.7.5.2
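
For context, the failure __btrfs_start_workers() can hit is essentially an
allocation or kthread_run() error.  The sketch below shows the rough shape of
that function at the time of this thread; it is a paraphrase, not a verbatim
copy of fs/btrfs/async-thread.c, and the _sketch suffix plus the trimmed
initialization are editorial.  The point to notice is that the counter is only
decremented on the success path.

/*
 * Simplified sketch, not verbatim kernel code.  The caller (find_worker()
 * in the hunk above, or the btrfs_start_workers() wrapper) has already
 * bumped num_workers_starting, so an error return here leaves the count
 * too high unless the caller undoes it - which is what the patch adds.
 */
static int __btrfs_start_workers_sketch(struct btrfs_workers *workers,
                                        int num_workers)
{
        struct btrfs_worker_thread *worker;
        int ret;

        /*
         * The real function loops num_workers times; as noted in the
         * follow-up, callers only ever pass 1, so one iteration is shown.
         */
        worker = kzalloc(sizeof(*worker), GFP_NOFS);
        if (!worker) {
                ret = -ENOMEM;          /* the "fail to make an alloc" case */
                goto fail;
        }

        /* ... list, waitqueue and refcount initialization trimmed ... */

        worker->task = kthread_run(worker_loop, worker, "btrfs-%s",
                                   workers->name);
        if (IS_ERR(worker->task)) {
                ret = PTR_ERR(worker->task);
                kfree(worker);
                goto fail;
        }

        spin_lock_irq(&workers->lock);
        list_add_tail(&worker->worker_list, &workers->idle_list);
        worker->idle = 1;
        workers->num_workers++;
        workers->num_workers_starting--;        /* only dropped on success */
        spin_unlock_irq(&workers->lock);
        return 0;

fail:
        /* blanket cleanup of the whole pool; see the follow-up below */
        btrfs_stop_workers(workers);
        return ret;
}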



* Re: [PATCH] Btrfs: fix num_start_workers count if we fail to make an alloc
From: Al Viro @ 2011-11-18 20:20 UTC
  To: Josef Bacik; +Cc: linux-btrfs

On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
> Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
> basically), we could leak our count for num_start_workers, and so we'd think we
> had more workers than we actually do.  This could cause us to shrink workers
> when we shouldn't or not start workers when we should.  So check the return
> value, and if we failed, fix num_start_workers and fall back.  Thanks,

It's actually uglier than that; consider check_pending_workers_create()
where we
	* bump the num_start_workers
	* call start_new_worker(), which can fail, and then we have the same
leak; if it doesn't fail, it schedules a call of start_new_worker_func()
	* when start_new_worker_func() runs, it does btrfs_start_workers(),
which can run into the same leak again (this time on another pool - one
we have as ->async_helper).

Worse, __btrfs_start_workers() does btrfs_stop_workers() on failure.  That,
to put it mildly, is using excessive force.  As far as I can see, it's
_never_ the right thing to do - __btrfs_start_workers() is always getting
1 as the second argument, so even calls from mount path don't need that
kind of "kill ones we'd already created if we fail halfway through".  It
used to make sense when they had all been started at mount time, but now
it's useless in the best case (mount) and destructive elsewhere (when the
pool had already been non-empty).

So I'd suggest killing that call of btrfs_stop_workers() completely, losing
the num_workers argument (along with a loop in __btrfs_start_workers())
and looking into the check_pending_workers_create() path.

Probably I'd put the decrement of ->num_workers_starting into the failure exits
of __btrfs_start_workers() and start_new_worker(), but... can btrfs_queue_worker()
ever return non-zero?  AFAICS it can't, so we could just merge
start_new_worker() into check_pending_workers_create() and pull the allocation
before incrementing ->num_workers_starting...
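
The atomic-start path walked through above can be condensed as the sketch
below.  It is a rough paraphrase for orientation only, not verbatim kernel
code: the function carries a _sketch suffix, the field and helper names follow
the thread's usage, and the deferred start_new_worker_func() step is summarized
in comments rather than reproduced.

/*
 * Rough sketch of the check_pending_workers_create() path, not verbatim
 * kernel code.  Bump #1 below has no matching decrement if anything later
 * fails, which is the leak described above.
 */
static void check_pending_workers_create_sketch(struct btrfs_workers *workers)
{
        unsigned long flags;

        spin_lock_irqsave(&workers->lock, flags);
        if (!workers->atomic_start_pending) {
                spin_unlock_irqrestore(&workers->lock, flags);
                return;
        }
        workers->atomic_start_pending = 0;
        workers->num_workers_starting++;        /* bump #1 */
        spin_unlock_irqrestore(&workers->lock, flags);

        /*
         * start_new_worker(): kzalloc() a small work item and queue it on
         * the helper pool via btrfs_queue_worker().  If either step fails,
         * nothing undoes bump #1.
         */
        start_new_worker(workers);

        /*
         * Later, start_new_worker_func() runs from the helper pool and calls
         * btrfs_start_workers(workers, 1), which bumps the count again
         * (bump #2) before starting the thread; a failure in there is the
         * second place the count can be left too high.
         */
}

Al's suggestion above amounts to either decrementing ->num_workers_starting in
the failure exits of __btrfs_start_workers() and start_new_worker(), or, if
btrfs_queue_worker() indeed cannot fail, folding start_new_worker() into this
function and doing the allocation before bump #1, so that nothing after the
bump can fail.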


* Re: [PATCH] Btrfs: fix num_start_workers count if we fail to make an alloc
From: Al Viro @ 2011-11-19  1:37 UTC
  To: Josef Bacik; +Cc: linux-btrfs

On Fri, Nov 18, 2011 at 08:20:56PM +0000, Al Viro wrote:
> On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
> > Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
> > basically), we could leak our count for num_start_workers, and so we'd think we
> > had more workers than we actually do.  This could cause us to shrink workers
> > when we shouldn't or not start workers when we should.  So check the return
> > value, and if we failed, fix num_start_workers and fall back.  Thanks,
> 
> It's actually uglier than that; consider check_pending_workers_create()
> where we
> 	* bump the num_start_workers
> 	* call start_new_worker(), which can fail, and then we have the same
> leak; if it doesn't fail, it schedules a call of start_new_worker_func()
> 	* when start_new_worker_func() runs, it does btrfs_start_workers(),
> which can run into the same leak again (this time on another pool - one
> we have as ->async_helper).

Nuts...  AFAICS, we _always_ leak ->num_start_workers here (i.e. when
check_pending_workers_create() finds ->atomic_start_pending set).  In
effect, we bump it once in check_pending_workers_create() itself, then
another time (on the same pool) when start_new_worker_func() calls
btrfs_start_workers().  That one will be dropped when we manage to 
start the thread, but the first one won't.

Shouldn't we use __btrfs_start_workers() instead here?
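
The distinction behind that question: the btrfs_start_workers() wrapper bumps
->num_workers_starting itself before delegating, while __btrfs_start_workers()
assumes its caller already did.  A minimal sketch of the assumed shape of the
wrapper (again paraphrased, with a _sketch suffix, not a verbatim copy):

/*
 * Assumed shape of the wrapper, for illustration only.  Because it adds
 * its own bump, calling it from the atomic-start path - which already
 * bumped ->num_workers_starting once in check_pending_workers_create() -
 * double-counts, whereas calling __btrfs_start_workers() directly would
 * consume the bump already made there.
 */
static int btrfs_start_workers_sketch(struct btrfs_workers *workers,
                                      int num_workers)
{
        spin_lock_irq(&workers->lock);
        workers->num_workers_starting += num_workers;
        spin_unlock_irq(&workers->lock);
        return __btrfs_start_workers(workers, num_workers);
}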


* Re: [PATCH] Btrfs: fix num_start_workers count if we fail to make an alloc
From: Al Viro @ 2011-11-19  2:12 UTC
  To: Josef Bacik; +Cc: linux-btrfs

On Sat, Nov 19, 2011 at 01:37:39AM +0000, Al Viro wrote:
> On Fri, Nov 18, 2011 at 08:20:56PM +0000, Al Viro wrote:
> > On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
> > > Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
> > > basically), we could leak our count for num_start_workers, and so we'd think we
> > > had more workers than we actually do.  This could cause us to shrink workers
> > > when we shouldn't or not start workers when we should.  So check the return
> > > value, and if we failed, fix num_start_workers and fall back.  Thanks,
> > 
> > It's actually uglier than that; consider check_pending_workers_create()
> > where we
> > 	* bump the num_start_workers
> > 	* call start_new_worker(), which can fail, and then we have the same
> > leak; if it doesn't fail, it schedules a call of start_new_worker_func()
> > 	* when start_new_worker_func() runs, it does btrfs_start_workers(),
> > which can run into the same leak again (this time on another pool - one
> > we have as ->async_helper).
> 
> Nuts...  AFAICS, we _always_ leak ->num_start_workers here (i.e. when
> check_pending_workers_create() finds ->atomic_start_pending set).  In
> effect, we bump it once in check_pending_workers_create() itself, then
> another time (on the same pool) when start_new_worker_func() calls
> btrfs_start_workers().  That one will be dropped when we manage to 
> start the thread, but the first one won't.
> 
> Shouldn't we use __btrfs_start_workers() instead here?

OK, tentative fixes for that stuff are pushed into #btrfs in vfs.git; comments
would be welcome...

