* [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-06-16 10:42 UTC
To: Ted Tso
Cc: linux-ext4, Eryu Guan, Jan Kara, stable

Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
deadlock in ext4_writepages() which was previously much harder to hit.
After this commit, xfstest generic/130 reproduces the deadlock on small
filesystems.

The problem happens when ext4_do_update_inode() sets the LARGE_FILE
feature and marks the current inode handle as synchronous. That
subsequently causes ext4_journal_stop(), called from ext4_writepages(),
to block waiting for transaction commit while still holding page locks,
a reference to the io_end, and a prepared bio in the mpd structure, any
one of which can block the transaction commit from completing and thus
result in a deadlock.

Fix the problem by releasing page locks, dropping the io_end reference,
and submitting the prepared bio before calling ext4_journal_stop().

Reported-and-tested-by: Eryu Guan <eguan@redhat.com>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/inode.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index f7140ca66e3b..ba04d57656d4 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2748,13 +2748,27 @@ retry:
 				done = true;
 			}
 		}
-		ext4_journal_stop(handle);
 		/* Submit prepared bio */
 		ext4_io_submit(&mpd.io_submit);
 		/* Unlock pages we didn't use */
 		mpage_release_unused_pages(&mpd, give_up_on_write);
-		/* Drop our io_end reference we got from init */
-		ext4_put_io_end(mpd.io_submit.io_end);
+		/*
+		 * Drop our io_end reference we got from init. We have to be
+		 * careful and use deferred io_end finishing as we can release
+		 * the last reference to io_end which may end up doing unwritten
+		 * extent conversion which we cannot do while holding
+		 * transaction handle.
+		 */
+		ext4_put_io_end_defer(mpd.io_submit.io_end);
+		/*
+		 * Caution: ext4_journal_stop() can wait for transaction commit
+		 * to finish which may depend on writeback of pages to complete
+		 * or on page lock to be released. So we can call it only
+		 * after we have submitted all the IO, released page locks
+		 * we hold, and dropped io_end reference (for extent conversion
+		 * to be able to complete).
+		 */
+		ext4_journal_stop(handle);

 		if (ret == -ENOSPC && sbi->s_journal) {
 			/*
--
2.6.6
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-06-30 15:05 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Thu, Jun 16, 2016 at 12:42:13PM +0200, Jan Kara wrote:
> Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
> deadlock in ext4_writepages() which was previously much harder to hit.
> After this commit, xfstest generic/130 reproduces the deadlock on small
> filesystems.
>
> The problem happens when ext4_do_update_inode() sets the LARGE_FILE
> feature and marks the current inode handle as synchronous. That
> subsequently causes ext4_journal_stop(), called from ext4_writepages(),
> to block waiting for transaction commit while still holding page locks,
> a reference to the io_end, and a prepared bio in the mpd structure, any
> one of which can block the transaction commit from completing and thus
> result in a deadlock.

Would it be safe to submit the bio *after* calling ext4_journal_stop()?
As far as I can tell it would be, and I'd prefer to minimize the time
that a handle is open, since that can really impact performance when we
are trying to close all existing handles at the start of commit
processing.  It looks to me like this ordering would still avoid the
deadlock, yes?

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-01 9:09 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Thu 30-06-16 11:05:48, Ted Tso wrote:
> On Thu, Jun 16, 2016 at 12:42:13PM +0200, Jan Kara wrote:
> > Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
> > deadlock in ext4_writepages() which was previously much harder to hit.
> > After this commit, xfstest generic/130 reproduces the deadlock on small
> > filesystems.
> >
> > The problem happens when ext4_do_update_inode() sets the LARGE_FILE
> > feature and marks the current inode handle as synchronous. That
> > subsequently causes ext4_journal_stop(), called from ext4_writepages(),
> > to block waiting for transaction commit while still holding page locks,
> > a reference to the io_end, and a prepared bio in the mpd structure, any
> > one of which can block the transaction commit from completing and thus
> > result in a deadlock.
>
> Would it be safe to submit the bio *after* calling ext4_journal_stop()?
> As far as I can tell it would be, and I'd prefer to minimize the time
> that a handle is open, since that can really impact performance when we
> are trying to close all existing handles at the start of commit
> processing.  It looks to me like this ordering would still avoid the
> deadlock, yes?

But it is not safe - the bio contains pages, those pages have
PageWriteback set, and if the inode is part of the running transaction,
ext4_journal_stop() will wait for the transaction commit, which will
wait for all outstanding writeback on the inode, which will deadlock on
those pages that are part of our unsubmitted bio.  So the ordering
really has to be the way it is...

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-01 16:53 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Fri, Jul 01, 2016 at 11:09:50AM +0200, Jan Kara wrote:
> But it is not safe - the bio contains pages, those pages have
> PageWriteback set, and if the inode is part of the running transaction,
> ext4_journal_stop() will wait for the transaction commit, which will
> wait for all outstanding writeback on the inode, which will deadlock on
> those pages that are part of our unsubmitted bio.  So the ordering
> really has to be the way it is...

So to be clear: the issue is that PageWriteback won't get cleared until
we potentially do an uninit->init conversion, and this is what requires
taking a transaction handle, leading to the other half of the deadlock?

... and it's probably not safe to clear PageWriteback early in the bio
completion callback function, is it.  Hmm....

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-01 17:40 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Fri 01-07-16 12:53:39, Ted Tso wrote:
> On Fri, Jul 01, 2016 at 11:09:50AM +0200, Jan Kara wrote:
> > But it is not safe - the bio contains pages, those pages have
> > PageWriteback set, and if the inode is part of the running transaction,
> > ext4_journal_stop() will wait for the transaction commit, which will
> > wait for all outstanding writeback on the inode, which will deadlock on
> > those pages that are part of our unsubmitted bio.  So the ordering
> > really has to be the way it is...
>
> So to be clear: the issue is that PageWriteback won't get cleared until
> we potentially do an uninit->init conversion, and this is what requires
> taking a transaction handle, leading to the other half of the deadlock?

No. It is even simpler:

ext4_writepages(inode == "foobar")
  prepares pages to write, sets PageWriteback
  ...
  mpage_map_and_submit_extent()
    // Writing data past i_size
    if (disksize > EXT4_I(inode)->i_disksize) {
      ...
      err2 = ext4_mark_inode_dirty(handle, inode);
        ext4_mark_iloc_dirty(handle, inode, &iloc);
          ext4_do_update_inode(handle, inode, iloc);
            // First file beyond 2 GB
            if (ei->i_disksize > 0x7fffffffULL) {
              if (!ext4_has_feature_large_file(sb) || ...)
                set_large_file = 1;
            }
            ...
            if (set_large_file) {
              ...
              ext4_handle_sync(handle);
              ...
            }
  ext4_journal_stop()
    jbd2_journal_stop(handle);
      ...
      if (handle->h_sync || ... ) {
        if (handle->h_sync && !(current->flags & PF_MEMALLOC))
          wait_for_commit = 1;
        if (wait_for_commit)
          err = jbd2_log_wait_commit(journal, tid);

So we are waiting for transaction commit to finish with unsubmitted pages
that already have PageWriteback set (and also potentially other pages that
are locked and we didn't prepare them for writing because the block mapping
we got was too short). Now JBD2 goes on trying to do the transaction
commit:

jbd2_journal_commit_transaction()
  ...
  journal_finish_inode_data_buffers()
    list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
      ...
      err = filemap_fdatawait(jinode->i_vfs_inode->i_mapping);
      // And when inode "foobar" is part of this transaction's inode list,
      // this call is going to wait for the PageWriteback bits on all the
      // pages of the inode to get cleared - which never happens because
      // the IO was not even submitted for them. The bio is just sitting
      // prepared in mpd.io_submit in ext4_writepages() and would be
      // submitted only once ext4_journal_stop() completes.

Hope it is clearer now.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
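Condensing Jan's trace, the mutual wait looks like this (an illustrative
summary of the cycle described above, not code from any patch):

  ext4_writepages() thread                jbd2 commit thread
  ------------------------                ------------------
  lock pages, set PageWriteback
  prepare bio in mpd.io_submit
  ext4_journal_stop()
    handle->h_sync == 1, so:
    jbd2_log_wait_commit(tid)  --waits--> jbd2_journal_commit_transaction()
                                            journal_finish_inode_data_buffers()
                                              filemap_fdatawait(inode)
                                                waits for PageWriteback bits
                                                that only the still-unsubmitted
                                                bio in mpd.io_submit can clear

Neither side can make progress: the commit waits for writeback that the
writer has not yet submitted, while the writer waits for the commit.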
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-01 21:26 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Fri, Jul 01, 2016 at 07:40:41PM +0200, Jan Kara wrote:
>
> So we are waiting for transaction commit to finish with unsubmitted pages
> that already have PageWriteback set (and also potentially other pages that
> are locked and we didn't prepare them for writing because the block mapping
> we got was too short). Now JBD2 goes on trying to do the transaction
> commit:

Ah, I see, so this is only an issue in those cases where the handle is
synchronous.  Is this the only case where there is a concern?  (E.g.,
could we test handle->h_sync and stop the handle early if h_sync is not
set?)  This would put the uninit->init conversion into potentially a
separate transaction, but that should be OK.

The reason why I'm pushing so hard here is that long-running handles
are a major contributor to ext4 having poor CPU scalability numbers,
since we can end up having lots of threads waiting on the last
transaction to complete.  So keeping transactions small and fast is a
big deal.

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-04 14:00 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Fri 01-07-16 17:26:34, Ted Tso wrote:
> On Fri, Jul 01, 2016 at 07:40:41PM +0200, Jan Kara wrote:
> >
> > So we are waiting for transaction commit to finish with unsubmitted pages
> > that already have PageWriteback set (and also potentially other pages that
> > are locked and we didn't prepare them for writing because the block mapping
> > we got was too short). Now JBD2 goes on trying to do the transaction
> > commit:
>
> Ah, I see, so this is only an issue in those cases where the handle is
> synchronous.  Is this the only case where there is a concern?  (E.g.,
> could we test handle->h_sync and stop the handle early if h_sync is not
> set?)  This would put the uninit->init conversion into potentially a
> separate transaction, but that should be OK.

So checking handle->h_sync is possible and it would handle the problem as
well AFAICS.  However, I find it rather hacky to rely on the fact that
ext4_journal_stop() can block only when handle->h_sync is set.  As for the
uninit->init conversion: you likely mean that ext4_put_io_end_defer() is
run while the handle is still running - that is true, but any real work is
done from a workqueue, so my patch doesn't really change the transaction
in which the uninit->init conversion happens.

> The reason why I'm pushing so hard here is that long-running handles
> are a major contributor to ext4 having poor CPU scalability numbers,
> since we can end up having lots of threads waiting on the last
> transaction to complete.  So keeping transactions small and fast is a
> big deal.

OK, but we do all the block mappings, page locking etc. while the handle
is started, so it is not exactly a really short-lived handle.  The patch
adds there the submission of a bio (we have the IO plugged, so it will
just add the bio to the list of submitted bios), unlocking of locked
pages, and a refcount drop on the ioend (unless the IO is already
completed, only the refcount update is done; if the IO is completed, we
defer any real work to a workqueue anyway).  So although we add some work
which is done while the handle is still running, it is not that much.

If you have some tests which show how the transaction wait time
increases, I could be convinced the hack is worth it.  But so far I don't
think that messing with handle->h_sync is warranted.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
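For readers unfamiliar with the deferred drop Jan mentions, it works
roughly like this - a simplified outline of ext4_put_io_end_defer() from
fs/ext4/page-io.c of that era, not verbatim kernel source:

	void ext4_put_io_end_defer(ext4_io_end_t *io_end)
	{
		/* Drop one reference; only the last one does real work. */
		if (atomic_dec_and_test(&io_end->count)) {
			/* Nothing to convert: free the io_end inline. */
			if (!(io_end->flag & EXT4_IO_END_UNWRITTEN) ||
			    !io_end->size) {
				ext4_release_io_end(io_end);
				return;
			}
			/*
			 * Unwritten extents need conversion; queue it on
			 * the per-sb workqueue so it runs outside any
			 * transaction handle the caller may hold.
			 */
			ext4_add_complete_io(io_end);
		}
	}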
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-04 15:20 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Mon, Jul 04, 2016 at 04:00:12PM +0200, Jan Kara wrote:
> OK, but we do all the block mappings, page locking etc. while the handle
> is started, so it is not exactly a really short-lived handle.  The patch
> adds there the submission of a bio (we have the IO plugged, so it will
> just add the bio to the list of submitted bios), unlocking of locked
> pages, and a refcount drop on the ioend (unless the IO is already
> completed, only the refcount update is done; if the IO is completed, we
> defer any real work to a workqueue anyway).  So although we add some work
> which is done while the handle is still running, it is not that much.

Good point that the block device is plugged.  Ultimately I suspect the
way to fix the scalability problem will be to move to dioread_nolock as
the default, use a separate transaction to map the blocks with the
uninitialized flag, and then a separate transaction to convert them
afterwards.

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-04 15:47 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Mon 04-07-16 11:20:43, Ted Tso wrote:
> On Mon, Jul 04, 2016 at 04:00:12PM +0200, Jan Kara wrote:
> > OK, but we do all the block mappings, page locking etc. while the handle
> > is started, so it is not exactly a really short-lived handle.  The patch
> > adds there the submission of a bio (we have the IO plugged, so it will
> > just add the bio to the list of submitted bios), unlocking of locked
> > pages, and a refcount drop on the ioend (unless the IO is already
> > completed, only the refcount update is done; if the IO is completed, we
> > defer any real work to a workqueue anyway).  So although we add some work
> > which is done while the handle is still running, it is not that much.
>
> Good point that the block device is plugged.  Ultimately I suspect the
> way to fix the scalability problem will be to move to dioread_nolock as
> the default, use a separate transaction to map the blocks with the
> uninitialized flag, and then a separate transaction to convert them
> afterwards.

This is what already happens currently - we only reserve a handle for the
conversion during writeback, but that reservation freely moves between
running transactions until the point where the reserved handle is started
- then the handle is pinned to the currently running transaction - and
this happens only in the completion handler, after the IO is completed.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
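In code, the reserved-handle flow Jan describes looks roughly like the
following - paraphrased from fs/ext4/inode.c and fs/ext4/extents.c of
that era, with error handling and most details elided:

	/*
	 * In ext4_writepages(): start the handle for the mapping work
	 * and, at the same time, reserve credits for a later conversion
	 * handle.  The reservation is not yet tied to any transaction.
	 */
	handle = ext4_journal_start_with_reserve(inode, EXT4_HT_WRITE_PAGE,
						 needed_blocks, rsv_blocks);

	/*
	 * Much later, from the I/O-completion worker, the reserved
	 * handle is bound to whatever transaction is running at *that*
	 * moment, and only then is the unwritten->written conversion done:
	 */
	handle = ext4_journal_start_reserved(io_end->handle,
					     EXT4_HT_EXT_CONVERT);
	err = ext4_convert_unwritten_extents(handle, inode, offset, size);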
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-05 2:43 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Mon, Jul 04, 2016 at 05:47:09PM +0200, Jan Kara wrote:
> > Good point that the block device is plugged.  Ultimately I suspect the
> > way to fix the scalability problem will be to move to dioread_nolock as
> > the default, use a separate transaction to map the blocks with the
> > uninitialized flag, and then a separate transaction to convert them
> > afterwards.
>
> This is what already happens currently - we only reserve a handle for the
> conversion during writeback, but that reservation freely moves between
> running transactions until the point where the reserved handle is started
> - then the handle is pinned to the currently running transaction - and
> this happens only in the completion handler, after the IO is completed.

Yes, but dioread_nolock only works with block size == page size.  What
we need to do is to make it work for all block sizes, then make
dioread_nolock the default, and then remove the old direct I/O write
path....

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-06 7:04 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Mon 04-07-16 22:43:30, Ted Tso wrote:
> On Mon, Jul 04, 2016 at 05:47:09PM +0200, Jan Kara wrote:
> > > Good point that the block device is plugged.  Ultimately I suspect the
> > > way to fix the scalability problem will be to move to dioread_nolock as
> > > the default, use a separate transaction to map the blocks with the
> > > uninitialized flag, and then a separate transaction to convert them
> > > afterwards.
> >
> > This is what already happens currently - we only reserve a handle for the
> > conversion during writeback, but that reservation freely moves between
> > running transactions until the point where the reserved handle is started
> > - then the handle is pinned to the currently running transaction - and
> > this happens only in the completion handler, after the IO is completed.
>
> Yes, but dioread_nolock only works with block size == page size.  What
> we need to do is to make it work for all block sizes, then make
> dioread_nolock the default, and then remove the old direct I/O write
> path....

Yes, I completely agree.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-04 14:14 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

This is what I'm currently testing; do you have objections to this?  It
will make it harder if we ever want to teach lockdep how to notice
problems like this, but we don't at the moment, and it will make a
scalability difference in the common case...

					- Ted

From 646caa9c8e196880b41cd3e3d33a2ebc752bdb85 Mon Sep 17 00:00:00 2001
From: Jan Kara <jack@suse.cz>
Date: Mon, 4 Jul 2016 10:14:01 -0400
Subject: [PATCH] ext4: fix deadlock during page writeback

Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
deadlock in ext4_writepages() which was previously much harder to hit.
After this commit, xfstest generic/130 reproduces the deadlock on small
filesystems.

The problem happens when ext4_do_update_inode() sets the LARGE_FILE
feature and marks the current inode handle as synchronous. That
subsequently causes ext4_journal_stop(), called from ext4_writepages(),
to block waiting for transaction commit while still holding page locks,
a reference to the io_end, and a prepared bio in the mpd structure, any
one of which can block the transaction commit from completing and thus
result in a deadlock.

Fix the problem by releasing page locks, dropping the io_end reference,
and submitting the prepared bio before calling ext4_journal_stop().

[ Changed to defer the call to ext4_journal_stop() only if the handle
  is synchronous. --tytso ]

Reported-and-tested-by: Eryu Guan <eguan@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/inode.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 44ee5d9..321a31c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2754,13 +2754,36 @@ retry:
 				done = true;
 			}
 		}
-		ext4_journal_stop(handle);
+		/*
+		 * Caution: If the handle is synchronous,
+		 * ext4_journal_stop() can wait for transaction commit
+		 * to finish which may depend on writeback of pages to
+		 * complete or on page lock to be released. In that
+		 * case, we have to wait until after we have
+		 * submitted all the IO, released page locks we hold,
+		 * and dropped io_end reference (for extent conversion
+		 * to be able to complete) before stopping the handle.
+		 */
+		if (!ext4_handle_valid(handle) || handle->h_sync == 0) {
+			ext4_journal_stop(handle);
+			handle = NULL;
+		}
 		/* Submit prepared bio */
 		ext4_io_submit(&mpd.io_submit);
 		/* Unlock pages we didn't use */
 		mpage_release_unused_pages(&mpd, give_up_on_write);
-		/* Drop our io_end reference we got from init */
-		ext4_put_io_end(mpd.io_submit.io_end);
+		/*
+		 * Drop our io_end reference we got from init. We have
+		 * to be careful and use deferred io_end finishing if
+		 * we are still holding the transaction as we can
+		 * release the last reference to io_end which may end
+		 * up doing unwritten extent conversion.
+		 */
+		if (handle) {
+			ext4_put_io_end_defer(mpd.io_submit.io_end);
+			ext4_journal_stop(handle);
+		} else
+			ext4_put_io_end(mpd.io_submit.io_end);

 		if (ret == -ENOSPC && sbi->s_journal) {
 			/*
--
2.5.0
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-04 15:51 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Mon 04-07-16 10:14:35, Ted Tso wrote:
> This is what I'm currently testing; do you have objections to this?

Meh, I don't like it but it should work...  Did you see any improvement
with your change, or are you just operating on the assumption that you
want as little code as possible to run while the handle is open?  Can
you maybe add a comment saying that we stop !h_sync handles early to
make them run for as short a time as possible, to improve scalability?

								Honza

> It will make it harder if we ever want to teach lockdep how to notice
> problems like this, but we don't at the moment, and it will make a
> scalability difference in the common case...
>
> 					- Ted
>
> From 646caa9c8e196880b41cd3e3d33a2ebc752bdb85 Mon Sep 17 00:00:00 2001
> From: Jan Kara <jack@suse.cz>
> Date: Mon, 4 Jul 2016 10:14:01 -0400
> Subject: [PATCH] ext4: fix deadlock during page writeback
>
> Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
> deadlock in ext4_writepages() which was previously much harder to hit.
> After this commit, xfstest generic/130 reproduces the deadlock on small
> filesystems.
>
> The problem happens when ext4_do_update_inode() sets the LARGE_FILE
> feature and marks the current inode handle as synchronous. That
> subsequently causes ext4_journal_stop(), called from ext4_writepages(),
> to block waiting for transaction commit while still holding page locks,
> a reference to the io_end, and a prepared bio in the mpd structure, any
> one of which can block the transaction commit from completing and thus
> result in a deadlock.
>
> Fix the problem by releasing page locks, dropping the io_end reference,
> and submitting the prepared bio before calling ext4_journal_stop().
>
> [ Changed to defer the call to ext4_journal_stop() only if the handle
>   is synchronous. --tytso ]
>
> Reported-and-tested-by: Eryu Guan <eguan@redhat.com>
> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
> CC: stable@vger.kernel.org
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/ext4/inode.c | 29 ++++++++++++++++++++++++++---
>  1 file changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 44ee5d9..321a31c 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2754,13 +2754,36 @@ retry:
>  				done = true;
>  			}
>  		}
> -		ext4_journal_stop(handle);
> +		/*
> +		 * Caution: If the handle is synchronous,
> +		 * ext4_journal_stop() can wait for transaction commit
> +		 * to finish which may depend on writeback of pages to
> +		 * complete or on page lock to be released. In that
> +		 * case, we have to wait until after we have
> +		 * submitted all the IO, released page locks we hold,
> +		 * and dropped io_end reference (for extent conversion
> +		 * to be able to complete) before stopping the handle.
> +		 */
> +		if (!ext4_handle_valid(handle) || handle->h_sync == 0) {
> +			ext4_journal_stop(handle);
> +			handle = NULL;
> +		}
>  		/* Submit prepared bio */
>  		ext4_io_submit(&mpd.io_submit);
>  		/* Unlock pages we didn't use */
>  		mpage_release_unused_pages(&mpd, give_up_on_write);
> -		/* Drop our io_end reference we got from init */
> -		ext4_put_io_end(mpd.io_submit.io_end);
> +		/*
> +		 * Drop our io_end reference we got from init. We have
> +		 * to be careful and use deferred io_end finishing if
> +		 * we are still holding the transaction as we can
> +		 * release the last reference to io_end which may end
> +		 * up doing unwritten extent conversion.
> +		 */
> +		if (handle) {
> +			ext4_put_io_end_defer(mpd.io_submit.io_end);
> +			ext4_journal_stop(handle);
> +		} else
> +			ext4_put_io_end(mpd.io_submit.io_end);
>
>  		if (ret == -ENOSPC && sbi->s_journal) {
>  			/*
> --
> 2.5.0

--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-05 3:38 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Mon, Jul 04, 2016 at 05:51:07PM +0200, Jan Kara wrote:
> On Mon 04-07-16 10:14:35, Ted Tso wrote:
> > This is what I'm currently testing; do you have objections to this?
>
> Meh, I don't like it but it should work...  Did you see any improvement
> with your change, or are you just operating on the assumption that you
> want as little code as possible to run while the handle is open?

I haven't had a chance to try to benchmark it yet.  I've been working
at home over the long (US) holiday weekend, and the high core-count
servers I need are on the internal work network, and it's a pain to
access them from home.

I've just been tired of seeing the sort of analysis that can be found
in papers like:

   https://www.usenix.org/system/files/conference/fast14/fast14-paper_kang.pdf

(And there's an ATC 2016 paper which shows that things haven't gotten
any better since.)

Given that our massive lock bottlenecks come from the j_list_lock and
j_state_lock, and that most of the contention happens when we are
closing down a transaction for a commit, there is a pretty direct
correlation between handle lifetimes and the contention statistics on
the journal spinlocks.  Enough so that I've instrumented the handle
type and handle line number in the jbd2_handle_stats tracepoint, and
work to push down on the handle hold times has definitely helped our
contention numbers.

So I do have experimental evidence that reducing the code run while a
handle is open does matter in general.  I just don't have it for this
specific case yet....

Cheers,

					- Ted
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-06 7:51 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Mon 04-07-16 23:38:24, Ted Tso wrote:
> On Mon, Jul 04, 2016 at 05:51:07PM +0200, Jan Kara wrote:
> > On Mon 04-07-16 10:14:35, Ted Tso wrote:
> > > This is what I'm currently testing; do you have objections to this?
> >
> > Meh, I don't like it but it should work...  Did you see any improvement
> > with your change, or are you just operating on the assumption that you
> > want as little code as possible to run while the handle is open?
>
> I haven't had a chance to try to benchmark it yet.  I've been working
> at home over the long (US) holiday weekend, and the high core-count
> servers I need are on the internal work network, and it's a pain to
> access them from home.
>
> I've just been tired of seeing the sort of analysis that can be found
> in papers like:
>
>    https://www.usenix.org/system/files/conference/fast14/fast14-paper_kang.pdf

So the biggest gap shown in this paper is for buffered writes, where I
suspect ext4 suffers because it starts a handle for each write.  There is
some low-hanging fruit there though - we just need to start the handle
only when we may be updating i_size.  I'll try to look into this when I
have time to set up a proper benchmark.

> (And there's an ATC 2016 paper which shows that things haven't gotten
> any better since.)
>
> Given that our massive lock bottlenecks come from the j_list_lock and
> j_state_lock, and that most of the contention happens when we are
> closing down a transaction for a commit, there is a pretty direct
> correlation between handle lifetimes and the contention statistics on
> the journal spinlocks.  Enough so that I've instrumented the handle
> type and handle line number in the jbd2_handle_stats tracepoint, and
> work to push down on the handle hold times has definitely helped our
> contention numbers.

Yeah, JBD2 scalability sucks.  I suspect you are conflating two issues
here though.  One issue is j_list_lock and j_state_lock contention - that
is exposed by starting handles often, doing lots of operations with
buffers, etc.  This is what the above paper shows.  Another issue is that
while a transaction is preparing for commit, we have to wait for all
outstanding handles against that transaction, and while we do that, we
have no running transaction and the whole journalling machinery is
stalled.  For this problem, the time each handle runs is essential.  This
is what you've likely seen in your testing.

Reducing j_list_lock and j_state_lock contention is IMO doable, although
the low-hanging fruit has probably been eaten these days ;).  Fixing the
second problem is harder, as it is an inherent problem of block-level
journalling.  I suspect we could allow starting another transaction while
the previous one is in the "preparing for commit" phase, but that would
lead to two transactions receiving updates at the same point in time,
which JBD2 currently does not expect.

> So I do have experimental evidence that reducing the code run while a
> handle is open does matter in general.  I just don't have it for this
> specific case yet....

OK.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
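One possible shape of the buffered-write low-hanging fruit Jan mentions -
purely a hypothetical sketch, not an actual patch - would be to gate the
handle on whether the write can change anything beyond the data itself:

	/*
	 * Hypothetical: in the write-end path, open a handle only when
	 * this write may move i_size; plain overwrites would then run
	 * without touching the journal at all.
	 */
	if (pos + copied > i_size_read(inode)) {
		handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
		if (IS_ERR(handle))
			return PTR_ERR(handle);
		/* ... update i_size/i_disksize, mark the inode dirty ... */
		ext4_journal_stop(handle);
	}

Whether the inode can really avoid all journaled updates in the
no-extension case is exactly the kind of detail such a change would have
to audit.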
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-06 12:35 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Wed, Jul 06, 2016 at 09:51:16AM +0200, Jan Kara wrote:
>
> Yeah, JBD2 scalability sucks.  I suspect you are conflating two issues
> here though.  One issue is j_list_lock and j_state_lock contention - that
> is exposed by starting handles often, doing lots of operations with
> buffers, etc.  This is what the above paper shows.  Another issue is that
> while a transaction is preparing for commit, we have to wait for all
> outstanding handles against that transaction, and while we do that, we
> have no running transaction and the whole journalling machinery is
> stalled.  For this problem, the time each handle runs is essential.  This
> is what you've likely seen in your testing.

You're right, I'm conflating two separate issues.  The
j_{list,state}_lock contention is the more obvious of the two, but it's
separate from the issue of the journalling machinery being stalled on
j_wait_transaction_locked.

> Reducing j_list_lock and j_state_lock contention is IMO doable, although
> the low-hanging fruit has probably been eaten these days ;).

Yeah, most of the low-hanging fruit has been grabbed already, which is
why I tend to focus more on the second issue these days.  The main
thing which is left would be splitting j_list_lock so there are
separate locks for each of the different lists (t_buffers, t_forget,
t_shadow_list, t_reserved_list, etc.)  What makes this tricky is that
when we are moving blocks from one list to another, we need to have
both lists locked, so this means rearchitecting a large amount of the
locking in fs/jbd2/transaction.c and, of course, worrying about lock
rank issues.

> Fixing the second problem is harder, as it is an inherent problem of
> block-level journalling.  I suspect we could allow starting another
> transaction while the previous one is in the "preparing for commit"
> phase, but that would lead to two transactions receiving updates at
> the same point in time, which JBD2 currently does not expect.

Starting another transaction while we are waiting for the earlier
transaction to lock down is going to be problematic, since while there
are still handles active on the first transaction, they could still be
modifying metadata blocks.  And while that's happening, we can't allow
any new handles associated with the second transaction to start
modifying metadata blocks.

If there were some way for all of the currently open handles to
guarantee that they won't call get_write_access() on any new blocks,
maybe.  But if you look at truncate, for example, that gets messy ---
and we could get most of the benefit by simply making truncate a
two-part operation, where it identifies all of the blocks it needs to
modify and makes sure they are in memory *before* it calls
start_this_handle.  And then this falls under the general design
principle of keeping the run time of handles as short as possible.

					- Ted
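In outline, the two-part truncate Ted sketches might look like this -
hypothetical pseudocode, not existing kernel code:

	/*
	 * Pass 1: no handle held; just populate the buffer cache with
	 * every metadata block (bitmaps, group descriptors, extent or
	 * indirect blocks) the truncate will need to dirty.  This pass
	 * may block on disk reads without stalling the journal.
	 */
	prefetch_truncate_metadata(inode, new_size);

	/* Pass 2: the handle is held only for the in-memory updates. */
	handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits);
	/* ... get_write_access() + modify the now-cached blocks ... */
	ext4_journal_stop(handle);

The races this opens (blocks evicted or changed between the two passes)
would have to be handled by re-checking and, in the worst case, retrying.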
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-06 12:52 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Wed 06-07-16 08:35:10, Ted Tso wrote:
> On Wed, Jul 06, 2016 at 09:51:16AM +0200, Jan Kara wrote:
> > Fixing the second problem is harder, as it is an inherent problem of
> > block-level journalling.  I suspect we could allow starting another
> > transaction while the previous one is in the "preparing for commit"
> > phase, but that would lead to two transactions receiving updates at
> > the same point in time, which JBD2 currently does not expect.
>
> Starting another transaction while we are waiting for the earlier
> transaction to lock down is going to be problematic, since while there
> are still handles active on the first transaction, they could still be
> modifying metadata blocks.  And while that's happening, we can't allow
> any new handles associated with the second transaction to start
> modifying metadata blocks.

Well, we can.  We just have to make sure we snapshot the contents that
should be committed before we modify the block from the new transaction.
We already do this when we are committing a block and need to modify it
in the running transaction at the same time.  Obviously, allowing this
logic to trigger earlier will lead to higher memory overhead, and
allocation, copying, and freeing of block snapshots isn't free either, so
it will need careful benchmarking.

> If there were some way for all of the currently open handles to
> guarantee that they won't call get_write_access() on any new blocks,
> maybe.  But if you look at truncate, for example, that gets messy ---
> and we could get most of the benefit by simply making truncate a
> two-part operation, where it identifies all of the blocks it needs to
> modify and makes sure they are in memory *before* it calls
> start_this_handle.  And then this falls under the general design
> principle of keeping the run time of handles as short as possible.

Yeah, I'm afraid the complexity of this will be rather high...

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
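The existing copy-out Jan refers to is jbd2's "frozen buffer" mechanism.
In rough outline - paraphrasing do_get_write_access() in
fs/jbd2/transaction.c, with locking and error handling omitted - when the
running transaction wants a buffer that the committing transaction still
owns, the committed contents are snapshotted first:

	if (jh->b_transaction == journal->j_committing_transaction &&
	    jh->b_frozen_data == NULL) {
		/* frozen_buffer was allocated earlier with jbd2_alloc() */
		jh->b_frozen_data = frozen_buffer;
		memcpy(jh->b_frozen_data, bh->b_data, bh->b_size);
		/*
		 * The commit writes b_frozen_data to the journal, while
		 * the running transaction is free to modify bh->b_data.
		 */
		jh->b_next_transaction = transaction;
	}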
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Theodore Ts'o @ 2016-07-06 14:27 UTC
To: Jan Kara
Cc: linux-ext4, Eryu Guan, stable

On Wed, Jul 06, 2016 at 02:52:28PM +0200, Jan Kara wrote:
> > Starting another transaction while we are waiting for the earlier
> > transaction to lock down is going to be problematic, since while there
> > are still handles active on the first transaction, they could still be
> > modifying metadata blocks.  And while that's happening, we can't allow
> > any new handles associated with the second transaction to start
> > modifying metadata blocks.
>
> Well, we can.  We just have to make sure we snapshot the contents that
> should be committed before we modify the block from the new transaction.
> We already do this when we are committing a block and need to modify it
> in the running transaction at the same time.  Obviously, allowing this
> logic to trigger earlier will lead to higher memory overhead, and
> allocation, copying, and freeing of block snapshots isn't free either,
> so it will need careful benchmarking.

Consider the following sequence:

    Start handle A attached to txn #42

    <Start committing transaction #42>

    Start handle B attached to txn #43
	(B) Call get_write_access on block bitmap #100
	(B) Modify block bitmap #100
	(B) journal_dirty_metadata for #100

	(A) Call get_write_access on block bitmap #100
	(A) Modify block bitmap #100
	(A) journal_dirty_metadata for #100

Snapshotting the block bitmap when handle B calls get_write_access()
won't help, because if handle B starts modifying the block bitmap, and
*then* handle A starts trying to modify the same block bitmap, what do
we do?

You could make handle A make the same logical modification in both the
copy of the metadata block associated with the first transaction (#42)
and the copy associated with the second transaction (#43), and for an
allocation bitmap maybe that's even doable.

But consider the even more hairy case where handle A and handle B are
both modifying an inline xattr, and handle B has to spill some of the
extended attribute contents to an external xattr block.  Now when
handle A makes some other xattr change, the change it needs to make for
transaction #42 might be very different from the one for transaction
#43.  The complexity of handling this would be extremely high, and I
suspect doing a two-pass truncate would actually be simpler....

					- Ted

> > If there were some way for all of the currently open handles to
> > guarantee that they won't call get_write_access() on any new blocks,
> > maybe.  But if you look at truncate, for example, that gets messy ---
> > and we could get most of the benefit by simply making truncate a
> > two-part operation, where it identifies all of the blocks it needs to
> > modify and makes sure they are in memory *before* it calls
> > start_this_handle.  And then this falls under the general design
> > principle of keeping the run time of handles as short as possible.
>
> Yeah, I'm afraid the complexity of this will be rather high...
>
> 								Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
* Re: [PATCH 1/4] ext4: Fix deadlock during page writeback
From: Jan Kara @ 2016-07-06 14:41 UTC
To: Theodore Ts'o
Cc: Jan Kara, linux-ext4, Eryu Guan, stable

On Wed 06-07-16 10:27:23, Ted Tso wrote:
> On Wed, Jul 06, 2016 at 02:52:28PM +0200, Jan Kara wrote:
> > Well, we can.  We just have to make sure we snapshot the contents that
> > should be committed before we modify the block from the new transaction.
> > We already do this when we are committing a block and need to modify it
> > in the running transaction at the same time.  Obviously, allowing this
> > logic to trigger earlier will lead to higher memory overhead, and
> > allocation, copying, and freeing of block snapshots isn't free either,
> > so it will need careful benchmarking.
>
> Consider the following sequence:
>
>     Start handle A attached to txn #42
>
>     <Start committing transaction #42>
>
>     Start handle B attached to txn #43
> 	(B) Call get_write_access on block bitmap #100
> 	(B) Modify block bitmap #100
> 	(B) journal_dirty_metadata for #100
>
> 	(A) Call get_write_access on block bitmap #100
> 	(A) Modify block bitmap #100
> 	(A) journal_dirty_metadata for #100
>
> Snapshotting the block bitmap when handle B calls get_write_access()
> won't help, because if handle B starts modifying the block bitmap, and
> *then* handle A starts trying to modify the same block bitmap, what do
> we do?
>
> You could make handle A make the same logical modification in both the
> copy of the metadata block associated with the first transaction (#42)
> and the copy associated with the second transaction (#43), and for an
> allocation bitmap maybe that's even doable.
>
> But consider the even more hairy case where handle A and handle B are
> both modifying an inline xattr, and handle B has to spill some of the
> extended attribute contents to an external xattr block.  Now when
> handle A makes some other xattr change, the change it needs to make for
> transaction #42 might be very different from the one for transaction
> #43.

Yup, good point.

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
Thread overview: 19+ messages
[not found] <1466073736-30447-1-git-send-email-jack@suse.cz>
2016-06-16 10:42 ` [PATCH 1/4] ext4: Fix deadlock during page writeback Jan Kara
2016-06-30 15:05   ` Theodore Ts'o
2016-07-01  9:09     ` Jan Kara
2016-07-01 16:53       ` Theodore Ts'o
2016-07-01 17:40         ` Jan Kara
2016-07-01 21:26           ` Theodore Ts'o
2016-07-04 14:00             ` Jan Kara
2016-07-04 15:20               ` Theodore Ts'o
2016-07-04 15:47                 ` Jan Kara
2016-07-05  2:43                   ` Theodore Ts'o
2016-07-06  7:04                     ` Jan Kara
2016-07-04 14:14             ` Theodore Ts'o
2016-07-04 15:51               ` Jan Kara
2016-07-05  3:38                 ` Theodore Ts'o
2016-07-06  7:51                   ` Jan Kara
2016-07-06 12:35                     ` Theodore Ts'o
2016-07-06 12:52                       ` Jan Kara
2016-07-06 14:27                         ` Theodore Ts'o
2016-07-06 14:41                           ` Jan Kara