* [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq.
@ 2010-05-28 6:22 Tao Ma
2010-05-28 9:12 ` Wengang Wang
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Tao Ma @ 2010-05-28 6:22 UTC (permalink / raw)
To: ocfs2-devel
We used to run the orphan scan work on the default work queue,
but there is a corner case that can deadlock the system.
The scenario is as follows:
1. Set the heartbeat threshold to 200. This gives us a good chance of
running an orphan scan before the quorum decision is made.
2. Mount node 1.
3. After 1~2 minutes, mount node 2 (to make the bug easier to
reproduce, add maxcpus=1 to the kernel command line).
4. Node 1 does its orphan scan work.
5. Node 2 does its orphan scan work.
6. Node 1 does its orphan scan work again. After this, node 1 holds the
orphan scan lock while node 2 knows that node 1 is the master.
7. ifdown eth2 on node 2 (eth2 is the interface used for the ocfs2
interconnect).

Now when node 2 begins its orphan scan, the system work queue is blocked.

The root cause is that both the orphan scan work and the quorum decision
work use the system event work queue. The orphan scan can block the
event work queue (in dlm_wait_for_node_death), so the quorum decision
work never gets a chance to run.

This patch resolves this by moving the orphan scan work to ocfs2_wq.
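
For illustration, the queueing change boils down to the pattern below (a
minimal sketch, not code from this patch; the create_singlethread_workqueue()
call is only an assumption to keep the example self-contained, the real
ocfs2_wq is created elsewhere in ocfs2):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

/* dedicated queue, separate from the system event work queue */
static struct workqueue_struct *ocfs2_wq;

static void example_rearm(struct delayed_work *dwork, unsigned long delay)
{
	/* before: schedule_delayed_work(dwork, delay) shared the system
	 * event work queue with the quorum decision work; after: the
	 * work runs on the private queue, so it cannot starve it */
	queue_delayed_work(ocfs2_wq, dwork, delay);
}

static int __init example_init(void)
{
	ocfs2_wq = create_singlethread_workqueue("ocfs2_wq");
	return ocfs2_wq ? 0 : -ENOMEM;
}

static void __exit example_exit(void)
{
	destroy_workqueue(ocfs2_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
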
Signed-off-by: Tao Ma <tao.ma@oracle.com>
---
fs/ocfs2/journal.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 57e3fef..e02788f 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -1938,7 +1938,7 @@ void ocfs2_orphan_scan_work(struct work_struct *work)
mutex_lock(&os->os_lock);
ocfs2_queue_orphan_scan(osb);
if (atomic_read(&os->os_state) == ORPHAN_SCAN_ACTIVE)
- schedule_delayed_work(&os->os_orphan_scan_work,
+ queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
ocfs2_orphan_scan_timeout());
mutex_unlock(&os->os_lock);
}
@@ -1978,8 +1978,8 @@ void ocfs2_orphan_scan_start(struct ocfs2_super *osb)
atomic_set(&os->os_state, ORPHAN_SCAN_INACTIVE);
else {
atomic_set(&os->os_state, ORPHAN_SCAN_ACTIVE);
- schedule_delayed_work(&os->os_orphan_scan_work,
- ocfs2_orphan_scan_timeout());
+ queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
+ ocfs2_orphan_scan_timeout());
}
}
--
1.5.5
* [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq.
2010-05-28 6:22 [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq Tao Ma
@ 2010-05-28 9:12 ` Wengang Wang
2010-05-28 9:21 ` Wengang Wang
2010-06-01 22:07 ` Sunil Mushran
2010-06-15 23:47 ` Joel Becker
2 siblings, 1 reply; 5+ messages in thread
From: Wengang Wang @ 2010-05-28 9:12 UTC (permalink / raw)
To: ocfs2-devel
Hi Tao,
Checking the workers in ocfs2_wq, I found osb_truncate_log_wq. The worker
function ocfs2_truncate_log_worker() can end up in the following call chain:

ocfs2_truncate_log_worker()
  ocfs2_flush_truncate_log()
    __ocfs2_flush_truncate_log()
      ocfs2_inode_lock()

So I think that during ocfs2_inode_lock() we still have the possibility of
hanging the work queue at dlm_wait_for_node_death().
So maybe a dedicated work queue is the only choice?
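
In case it helps, here is a rough sketch of what such a dedicated queue
could look like (hypothetical name ocfs2_orphan_wq, just an illustration
rather than a concrete proposal):

#include <linux/workqueue.h>

static struct workqueue_struct *ocfs2_orphan_wq;

static int example_orphan_wq_init(void)
{
	/* used only by the orphan scan work, so a worker stuck in
	 * dlm_wait_for_node_death() cannot stall any other ocfs2 work */
	ocfs2_orphan_wq = create_singlethread_workqueue("ocfs2_orphan_wq");
	return ocfs2_orphan_wq ? 0 : -ENOMEM;
}
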
regards,
wengang.
On 10-05-28 14:22, Tao Ma wrote:
> We used to run the orphan scan work on the default work queue,
> but there is a corner case that can deadlock the system.
> The scenario is as follows:
> 1. Set the heartbeat threshold to 200. This gives us a good chance of
> running an orphan scan before the quorum decision is made.
> 2. Mount node 1.
> 3. After 1~2 minutes, mount node 2 (to make the bug easier to
> reproduce, add maxcpus=1 to the kernel command line).
> 4. Node 1 does its orphan scan work.
> 5. Node 2 does its orphan scan work.
> 6. Node 1 does its orphan scan work again. After this, node 1 holds the
> orphan scan lock while node 2 knows that node 1 is the master.
> 7. ifdown eth2 on node 2 (eth2 is the interface used for the ocfs2
> interconnect).
>
> Now when node 2 begins its orphan scan, the system work queue is blocked.
>
> The root cause is that both the orphan scan work and the quorum decision
> work use the system event work queue. The orphan scan can block the
> event work queue (in dlm_wait_for_node_death), so the quorum decision
> work never gets a chance to run.
>
> This patch resolves this by moving the orphan scan work to ocfs2_wq.
>
> Signed-off-by: Tao Ma <tao.ma@oracle.com>
> ---
> fs/ocfs2/journal.c | 6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> index 57e3fef..e02788f 100644
> --- a/fs/ocfs2/journal.c
> +++ b/fs/ocfs2/journal.c
> @@ -1938,7 +1938,7 @@ void ocfs2_orphan_scan_work(struct work_struct *work)
> mutex_lock(&os->os_lock);
> ocfs2_queue_orphan_scan(osb);
> if (atomic_read(&os->os_state) == ORPHAN_SCAN_ACTIVE)
> - schedule_delayed_work(&os->os_orphan_scan_work,
> + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> ocfs2_orphan_scan_timeout());
> mutex_unlock(&os->os_lock);
> }
> @@ -1978,8 +1978,8 @@ void ocfs2_orphan_scan_start(struct ocfs2_super *osb)
> atomic_set(&os->os_state, ORPHAN_SCAN_INACTIVE);
> else {
> atomic_set(&os->os_state, ORPHAN_SCAN_ACTIVE);
> - schedule_delayed_work(&os->os_orphan_scan_work,
> - ocfs2_orphan_scan_timeout());
> + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> + ocfs2_orphan_scan_timeout());
> }
> }
>
> --
> 1.5.5
>
>
* [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq.
2010-05-28 9:12 ` Wengang Wang
@ 2010-05-28 9:21 ` Wengang Wang
0 siblings, 0 replies; 5+ messages in thread
From: Wengang Wang @ 2010-05-28 9:21 UTC (permalink / raw)
To: ocfs2-devel
Sorry, I made a mistake.
The quorum worker is on the system work queue, so ocfs2_truncate_log_worker
will not block it.
regards,
wengang.
On 10-05-28 17:12, Wengang Wang wrote:
> Hi Tao,
>
> Checking the workers in ocfs2_wq, I found osb_truncate_log_wq. The worker
> function ocfs2_truncate_log_worker() can end up in the following call chain:
>
> ocfs2_truncate_log_worker()
>   ocfs2_flush_truncate_log()
>     __ocfs2_flush_truncate_log()
>       ocfs2_inode_lock()
>
> So I think that during ocfs2_inode_lock() we still have the possibility of
> hanging the work queue at dlm_wait_for_node_death().
>
> So maybe a dedicated work queue is the only choice?
>
> regards,
> wengang.
> On 10-05-28 14:22, Tao Ma wrote:
> > We used to run the orphan scan work on the default work queue,
> > but there is a corner case that can deadlock the system.
> > The scenario is as follows:
> > 1. Set the heartbeat threshold to 200. This gives us a good chance of
> > running an orphan scan before the quorum decision is made.
> > 2. Mount node 1.
> > 3. After 1~2 minutes, mount node 2 (to make the bug easier to
> > reproduce, add maxcpus=1 to the kernel command line).
> > 4. Node 1 does its orphan scan work.
> > 5. Node 2 does its orphan scan work.
> > 6. Node 1 does its orphan scan work again. After this, node 1 holds the
> > orphan scan lock while node 2 knows that node 1 is the master.
> > 7. ifdown eth2 on node 2 (eth2 is the interface used for the ocfs2
> > interconnect).
> >
> > Now when node 2 begins its orphan scan, the system work queue is blocked.
> >
> > The root cause is that both the orphan scan work and the quorum decision
> > work use the system event work queue. The orphan scan can block the
> > event work queue (in dlm_wait_for_node_death), so the quorum decision
> > work never gets a chance to run.
> >
> > This patch resolves this by moving the orphan scan work to ocfs2_wq.
> >
> > Signed-off-by: Tao Ma <tao.ma@oracle.com>
> > ---
> > fs/ocfs2/journal.c | 6 +++---
> > 1 files changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> > index 57e3fef..e02788f 100644
> > --- a/fs/ocfs2/journal.c
> > +++ b/fs/ocfs2/journal.c
> > @@ -1938,7 +1938,7 @@ void ocfs2_orphan_scan_work(struct work_struct *work)
> > mutex_lock(&os->os_lock);
> > ocfs2_queue_orphan_scan(osb);
> > if (atomic_read(&os->os_state) == ORPHAN_SCAN_ACTIVE)
> > - schedule_delayed_work(&os->os_orphan_scan_work,
> > + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> > ocfs2_orphan_scan_timeout());
> > mutex_unlock(&os->os_lock);
> > }
> > @@ -1978,8 +1978,8 @@ void ocfs2_orphan_scan_start(struct ocfs2_super *osb)
> > atomic_set(&os->os_state, ORPHAN_SCAN_INACTIVE);
> > else {
> > atomic_set(&os->os_state, ORPHAN_SCAN_ACTIVE);
> > - schedule_delayed_work(&os->os_orphan_scan_work,
> > - ocfs2_orphan_scan_timeout());
> > + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> > + ocfs2_orphan_scan_timeout());
> > }
> > }
> >
> > --
> > 1.5.5
> >
> >
* [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq.
2010-05-28 6:22 [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq Tao Ma
2010-05-28 9:12 ` Wengang Wang
@ 2010-06-01 22:07 ` Sunil Mushran
2010-06-15 23:47 ` Joel Becker
2 siblings, 0 replies; 5+ messages in thread
From: Sunil Mushran @ 2010-06-01 22:07 UTC (permalink / raw)
To: ocfs2-devel
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
On 05/27/2010 11:22 PM, Tao Ma wrote:
> We used to run the orphan scan work on the default work queue,
> but there is a corner case that can deadlock the system.
> The scenario is as follows:
> 1. Set the heartbeat threshold to 200. This gives us a good chance of
> running an orphan scan before the quorum decision is made.
> 2. Mount node 1.
> 3. After 1~2 minutes, mount node 2 (to make the bug easier to
> reproduce, add maxcpus=1 to the kernel command line).
> 4. Node 1 does its orphan scan work.
> 5. Node 2 does its orphan scan work.
> 6. Node 1 does its orphan scan work again. After this, node 1 holds the
> orphan scan lock while node 2 knows that node 1 is the master.
> 7. ifdown eth2 on node 2 (eth2 is the interface used for the ocfs2
> interconnect).
>
> Now when node 2 begins its orphan scan, the system work queue is blocked.
>
> The root cause is that both the orphan scan work and the quorum decision
> work use the system event work queue. The orphan scan can block the
> event work queue (in dlm_wait_for_node_death), so the quorum decision
> work never gets a chance to run.
>
> This patch resolves this by moving the orphan scan work to ocfs2_wq.
>
> Signed-off-by: Tao Ma <tao.ma@oracle.com>
> ---
> fs/ocfs2/journal.c | 6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> index 57e3fef..e02788f 100644
> --- a/fs/ocfs2/journal.c
> +++ b/fs/ocfs2/journal.c
> @@ -1938,7 +1938,7 @@ void ocfs2_orphan_scan_work(struct work_struct *work)
> mutex_lock(&os->os_lock);
> ocfs2_queue_orphan_scan(osb);
> if (atomic_read(&os->os_state) == ORPHAN_SCAN_ACTIVE)
> - schedule_delayed_work(&os->os_orphan_scan_work,
> + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> ocfs2_orphan_scan_timeout());
> mutex_unlock(&os->os_lock);
> }
> @@ -1978,8 +1978,8 @@ void ocfs2_orphan_scan_start(struct ocfs2_super *osb)
> atomic_set(&os->os_state, ORPHAN_SCAN_INACTIVE);
> else {
> atomic_set(&os->os_state, ORPHAN_SCAN_ACTIVE);
> - schedule_delayed_work(&os->os_orphan_scan_work,
> - ocfs2_orphan_scan_timeout());
> + queue_delayed_work(ocfs2_wq, &os->os_orphan_scan_work,
> + ocfs2_orphan_scan_timeout());
> }
> }
>
>
* [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq.
2010-05-28 6:22 [Ocfs2-devel] [PATCH] ocfs2: Move orphan scan work to ocfs2_wq Tao Ma
2010-05-28 9:12 ` Wengang Wang
2010-06-01 22:07 ` Sunil Mushran
@ 2010-06-15 23:47 ` Joel Becker
2 siblings, 0 replies; 5+ messages in thread
From: Joel Becker @ 2010-06-15 23:47 UTC (permalink / raw)
To: ocfs2-devel
On Fri, May 28, 2010 at 02:22:59PM +0800, Tao Ma wrote:
> We used to run the orphan scan work on the default work queue,
> but there is a corner case that can deadlock the system.
> The scenario is as follows:
> 1. Set the heartbeat threshold to 200. This gives us a good chance of
> running an orphan scan before the quorum decision is made.
> 2. Mount node 1.
> 3. After 1~2 minutes, mount node 2 (to make the bug easier to
> reproduce, add maxcpus=1 to the kernel command line).
> 4. Node 1 does its orphan scan work.
> 5. Node 2 does its orphan scan work.
> 6. Node 1 does its orphan scan work again. After this, node 1 holds the
> orphan scan lock while node 2 knows that node 1 is the master.
> 7. ifdown eth2 on node 2 (eth2 is the interface used for the ocfs2
> interconnect).
>
> Now when node 2 begins its orphan scan, the system work queue is blocked.
>
> The root cause is that both the orphan scan work and the quorum decision
> work use the system event work queue. The orphan scan can block the
> event work queue (in dlm_wait_for_node_death), so the quorum decision
> work never gets a chance to run.
>
> This patch resolves this by moving the orphan scan work to ocfs2_wq.
>
> Signed-off-by: Tao Ma <tao.ma@oracle.com>
This patch is now in the 'fixes' branch of ocfs2.git.
Joel
--
"The one important thing i have learned over the years is the
difference between taking one's work seriously and taking one's self
seriously. The first is imperative and the second is disastrous."
-Margot Fonteyn
Joel Becker
Principal Software Developer
Oracle
E-mail: joel.becker at oracle.com
Phone: (650) 506-8127