From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: Oleg Nesterov <oleg@redhat.com>
Cc: rostedt@goodmis.org, tglx@linutronix.de, peterz@infradead.org,
paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au,
mingo@kernel.org, akpm@linux-foundation.org, namhyung@kernel.org,
vincent.guittot@linaro.org, tj@kernel.org, sbw@mit.edu,
amit.kucheria@linaro.org, rjw@sisk.pl,
wangyun@linux.vnet.ibm.com, xiaoguangrong@linux.vnet.ibm.com,
nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v3 1/9] CPU hotplug: Provide APIs to prevent CPU offline from atomic context
Date: Mon, 10 Dec 2012 10:49:53 +0530 [thread overview]
Message-ID: <50C570F9.2020801@linux.vnet.ibm.com> (raw)
In-Reply-To: <20121209205733.GA7038@redhat.com>
On 12/10/2012 02:27 AM, Oleg Nesterov wrote:
> On 12/07, Srivatsa S. Bhat wrote:
>>
>> 4. No deadlock possibilities
>>
>> Per-cpu locking is not the way to go if we want to have relaxed rules
>> for lock-ordering. Because, we can end up in circular-locking dependencies
>> as explained in https://lkml.org/lkml/2012/12/6/290
>
> OK, but this assumes that, contrary to what Steven said, read-write-read
> deadlock is not possible when it comes to rwlock_t.
What I meant is that with a single (global) rwlock, you can't deadlock like that.
But if we use per-cpu rwlocks and don't implement them properly, then we can
end up in circular locking dependencies like the one shown above.
That is, if you take the same example and replace the lock with a global
rwlock, you won't deadlock:
Readers:

         CPU 0                                CPU 1
         ------                               ------
1.   spin_lock(&random_lock);             read_lock(&my_rwlock);

2.   read_lock(&my_rwlock);               spin_lock(&random_lock);

Writer:

         CPU 2:
         ------
     write_lock(&my_rwlock);
Even if the writer does a write_lock() in between steps 1 and 2 on the reader
side, nothing bad happens. The writer simply spins, because CPU 1 already holds
the rwlock for read, and hence both CPU 0 and CPU 1 can go ahead. When they
finish, the writer gets the lock and proceeds. So, no deadlock here.
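To make that concrete, here is a minimal kernel-style sketch of the same
interleaving with a single global rwlock (the lock names and functions are
illustrative, not from this patch series). Since rwlock_t readers never block
one another, the writer simply spins until both readers drop the lock:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(random_lock);
    static DEFINE_RWLOCK(my_rwlock);

    /* Reader on CPU 0: takes the spinlock first, then the global rwlock */
    static void reader_cpu0(void)
    {
            spin_lock(&random_lock);
            read_lock(&my_rwlock);      /* readers never block other readers */
            /* ... read-side critical section ... */
            read_unlock(&my_rwlock);
            spin_unlock(&random_lock);
    }

    /* Reader on CPU 1: takes the locks in the opposite order */
    static void reader_cpu1(void)
    {
            read_lock(&my_rwlock);
            spin_lock(&random_lock);
            /* ... read-side critical section ... */
            spin_unlock(&random_lock);
            read_unlock(&my_rwlock);
    }

    /* Writer on CPU 2: spins until both readers finish, then proceeds */
    static void writer_cpu2(void)
    {
            write_lock(&my_rwlock);
            /* ... write-side critical section ... */
            write_unlock(&my_rwlock);
    }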
So, what I was pointing out here was that if somebody replaced this global
rwlock with a "straight-forward" implementation of per-cpu rwlocks, they would
immediately end up in a circular locking dependency deadlock between the 3
entities, as explained in the link above.
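To spell out that circular dependency (this is my reconstruction of the
scenario from the link above, using a hypothetical per-cpu rwlock where each
reader locks only its own CPU's rwlock and the writer locks all of them in
CPU order):

      CPU 0 (reader)                CPU 1 (reader)               CPU 2 (writer)
      --------------                --------------               --------------
1. spin_lock(&random_lock);     read_lock(percpu rwlock 1);
2.                                                           write_lock(percpu rwlock 0);  [acquired]
3.                                                           write_lock(percpu rwlock 1);  [spins: CPU 1 holds it for read]
4. read_lock(percpu rwlock 0);  spin_lock(&random_lock);
      [spins: CPU 2 holds          [spins: CPU 0 holds
       it for write]                &random_lock]

CPU 0 waits on CPU 2, CPU 2 waits on CPU 1, and CPU 1 waits on CPU 0 -- a
cycle involving all three entities.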
Let me know if my assumptions are incorrect!
> So far I think this
> is true and we can't deadlock. Steven?
>
> However. If this is true, then compared to preempt_disable/stop_machine
> livelock is possible. Probably this is fine, we have the same problem with
> get_online_cpus(). But if we can accept this fact I feel we can simplify
> this somehow... Can't prove, only feel ;)
>
Not sure I follow...
Anyway, my point is that we _can't_ implement per-cpu rwlocks like lglocks
and expect them to work in this case. IOW, we can't do:
Reader-side:
    -> read_lock() your per-cpu rwlock and proceed.

Writer-side:
    -> for_each_online_cpu(cpu)
           write_lock(per-cpu rwlock of 'cpu');
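Just to make the point concrete, a minimal sketch of that naive lglock-style
scheme might look like the following (purely illustrative names, not code from
this series; the reader side is assumed to already run with preemption
disabled, as in the atomic-context use case):

    #include <linux/percpu.h>
    #include <linux/spinlock.h>
    #include <linux/cpumask.h>

    /* Hypothetical per-cpu rwlock, for illustration only */
    static DEFINE_PER_CPU(rwlock_t, my_percpu_rwlock) =
            __RW_LOCK_UNLOCKED(my_percpu_rwlock);

    /* Reader side: lock only this CPU's rwlock (preemption already disabled) */
    static void naive_read_lock(void)
    {
            read_lock(this_cpu_ptr(&my_percpu_rwlock));
    }

    static void naive_read_unlock(void)
    {
            read_unlock(this_cpu_ptr(&my_percpu_rwlock));
    }

    /* Writer side: take every online CPU's rwlock for write, in CPU order */
    static void naive_write_lock(void)
    {
            unsigned int cpu;

            for_each_online_cpu(cpu)
                    write_lock(&per_cpu(my_percpu_rwlock, cpu));
    }

    static void naive_write_unlock(void)
    {
            unsigned int cpu;

            for_each_online_cpu(cpu)
                    write_unlock(&per_cpu(my_percpu_rwlock, cpu));
    }

The problem is exactly the one in the interleaving shown earlier: each reader's
read_lock() nests inside whatever other locks that code path already holds,
while the writer takes all the per-cpu rwlocks in sequence, so the three-way
cycle becomes possible.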
Also, like Tejun said, one of the important requirements for per-cpu rwlocks
should be that if a user replaces global rwlocks with per-cpu rwlocks (for
performance reasons), he shouldn't suddenly end up with numerous deadlock
possibilities which never existed before. The replacement should remain just
as safe, and perhaps improve the performance.
Regards,
Srivatsa S. Bhat