From: Bryan O'Donoghue <pure.logic@nexus-software.ie>
To: Lukas Wunner <lukas@wunner.de>
Cc: Ingo Molnar <mingo@kernel.org>,
x86@kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Bjorn Helgaas <bhelgaas@google.com>,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/1] x86/platform/intel-mid: Retrofit pci_platform_pm_ops ->get_state hook
Date: Sun, 23 Oct 2016 17:25:52 +0100 [thread overview]
Message-ID: <1477239952.25812.11.camel@nexus-software.ie> (raw)
In-Reply-To: <20161023145757.GA4909@wunner.de>
On Sun, 2016-10-23 at 16:57 +0200, Lukas Wunner wrote:
> On Sun, Oct 23, 2016 at 01:37:55PM +0100, Bryan O'Donoghue wrote:
> >
> The usage of a mutex in mid_pwr_set_power_state() actually seems
> questionable since this is called with interrupts disabled:
>   pci_pm_resume_noirq
>     pci_pm_default_resume_early
>       pci_power_up
>         platform_pci_set_power_state
>           mid_pci_set_power_state
>             intel_mid_pci_set_power_state
>               mid_pwr_set_power_state
That was my other question, then - though I assume the mutex is put in
place to future-proof the code.

I'm just wondering out loud: given that we update a register and then
spin waiting for command completion, is it in fact logically valid for
a concurrent reader to read out the power state while another writer is
executing mid_pwr_wait(), for example?
/* Wait 500ms that the latest PWRMU command finished */
static int mid_pwr_wait(struct mid_pwr *pwr)
{
	unsigned int count = 500000;
	bool busy;

	do {
		busy = mid_pwr_is_busy(pwr);
		if (!busy)
			return 0;
		udelay(1);
	} while (--count);

	return -EBUSY;
}

static int mid_pwr_wait_for_cmd(struct mid_pwr *pwr, u8 cmd)
{
	writel(PM_CMD_CMD(cmd) | PM_CMD_CM_IMMEDIATE, pwr->regs + PM_CMD);

	return mid_pwr_wait(pwr);
}

static int __update_power_state(struct mid_pwr *pwr, int reg, int bit,
				int new)
{
	<snip>

	/* Update the power state */
	mid_pwr_set_state(pwr, reg, (power & ~(3 << bit)) | (new << bit));

	/* Send command to SCU */
	ret = mid_pwr_wait_for_cmd(pwr, CMD_SET_CFG);
	if (ret)
		return ret;

	<snip>
}
anyway...
I've tested your patch and it looks good. We can otherwise defer to
Andy on the usage of the mutex.
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
---
bod
Thread overview: 13+ messages
2016-10-23 11:55 [PATCH v3 0/1] x86/platform/intel-mid: Retrofit pci_platform_pm_ops Lukas Wunner
2016-10-23 11:55 ` [PATCH v3 1/1] x86/platform/intel-mid: Retrofit pci_platform_pm_ops ->get_state hook Lukas Wunner
2016-10-23 12:37 ` Bryan O'Donoghue
2016-10-23 14:57 ` Lukas Wunner
2016-10-23 16:25 ` Bryan O'Donoghue [this message]
2016-10-24 9:15 ` Andy Shevchenko
2016-10-24 10:09 ` Lukas Wunner
2016-10-24 11:05 ` Andy Shevchenko
2016-10-25 6:19 ` Lukas Wunner
2016-10-26 14:06 ` Bryan O'Donoghue
2016-10-26 15:01 ` Andy Shevchenko
2016-11-06 13:43 ` Thorsten Leemhuis
2016-11-06 17:12 ` Lukas Wunner