From: 21cnbao@gmail.com (Barry Song)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] arm:pm: save the vfp state of the last scheduled-out process while suspending
Date: Wed, 4 May 2011 13:28:59 +0800 [thread overview]
Message-ID: <BANLkTim_f2rjt8mzzeLvmMMin=ef5pYqOg@mail.gmail.com> (raw)
In-Reply-To: <1304418825.19196.67.camel@e102109-lin.cambridge.arm.com>
Hi Catalin,
2011/5/3 Catalin Marinas <catalin.marinas@arm.com>:
> On Fri, 2011-04-29 at 04:26 +0100, Barry Song wrote:
>> 2011/4/29 Barry Song <21cnbao@gmail.com>:
>> > It seems that Rongjun's code already handles Russell's last change,
>> > but in the "else" branch, whereas Russell handles it before the
>> > "if (fpexc & FPEXC_EN) {". We will take over the test in
>> > http://thread.gmane.org/gmane.linux.kernel/1099558 and continue
>> > with patch v3.
>>
>> But we don't think Russell's last change is completely right:
>>         /* If lazy disable, re-enable the VFP ready for it to be saved */
>>         if (last_VFP_context[ti->cpu] != &ti->vfpstate) {
>>                 fpexc |= FPEXC_EN;
>>                 fmxr(FPEXC, fpexc);
>>         }
>>         /* If VFP is on, then save state for resumption */
>>         if (fpexc & FPEXC_EN) {
>>                 ...
>>
>> There is still a risk. For example, suppose processes P1 and P2
>> switch like this:
>> P1: uses VFP
>>     switch to -> P2: does not use VFP
>>         switch to -> P1 (uses VFP), but it has not yet touched VFP
>> again, so FPEXC_EN is not set, and suspend happens at that moment.
>>
>> At that point, last_VFP_context[ti->cpu] is &ti->vfpstate but
>> fpexc & FPEXC_EN is false, so we lose the chance to save the state.
>
> I think your scenario is possible on UP systems. On SMP we save the VFP
> context during thread switching anyway.
>
>> So it looks like Rongjun's code avoids this risk:
>>         /* if vfp is on, then save state for resumption */
>>         if (fpexc & FPEXC_EN) {
>> @@ -392,8 +393,16 @@ static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
>>
>>                 /* disable, just in case */
>>                 fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
>> +        } else {
>> +                if (last_VFP_context[cpu]) {
>> +                        fmxr(FPEXC, fpexc | FPEXC_EN);
>> +                        vfp_save_state(last_VFP_context[cpu], fpexc);
>> +                        fmxr(FPEXC, fpexc);
>> +                }
>>         }
>
> I think we need to set last_VFP_context[cpu] to NULL as well so that the
> context is reloaded.
The original code already has that:
/* clear any information we had about last context state */
memset(last_VFP_context, 0, sizeof(last_VFP_context));
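
For reference, here is a minimal sketch of how the pieces discussed in
this thread fit together, modelled on the 2011-era vfp_pm_suspend() in
arch/arm/vfp/vfpmodule.c. It is illustrative only and may differ in
detail from what was eventually merged:

static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
{
        struct thread_info *ti = current_thread_info();
        u32 fpexc = fmrx(FPEXC);
        unsigned int cpu = ti->cpu;

        if (fpexc & FPEXC_EN) {
                /* The current thread owns the hardware state: save it
                 * so it can be restored after resume. */
                vfp_save_state(&ti->vfpstate, fpexc);

                /* disable, just in case */
                fmxr(FPEXC, fpexc & ~FPEXC_EN);
        } else if (last_VFP_context[cpu]) {
                /* Lazy case (UP): a scheduled-out thread still owns
                 * the hardware registers, and FPEXC_EN is clear
                 * because the current thread has not touched VFP yet.
                 * Temporarily enable VFP, save the registers into the
                 * owner's context, then restore the old FPEXC. */
                fmxr(FPEXC, fpexc | FPEXC_EN);
                vfp_save_state(last_VFP_context[cpu], fpexc);
                fmxr(FPEXC, fpexc);
        }

        /* Clear the ownership tracking so that, after resume, the
         * first VFP user on each CPU reloads its context from memory
         * instead of trusting stale hardware registers. */
        memset(last_VFP_context, 0, sizeof(last_VFP_context));

        return 0;
}

The else branch covers exactly the UP scenario above: the registers
still belong to a scheduled-out thread, so they must be saved on its
behalf before the state is lost across suspend.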
>
> --
> Catalin
>
>
>
Thanks
Barry