From: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
To: rafael.j.wysocki@intel.com, viresh.kumar@linaro.org
Cc: linux-pm@vger.kernel.org,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Subject: [PATCH 1/2] cpufreq: acpi_cpufreq: prevent crash on reading freqdomain_cpus
Date: Thu, 1 Oct 2015 15:23:01 -0700
Message-ID: <1443738182-4077-1-git-send-email-srinivas.pandruvada@linux.intel.com>

Reading the freqdomain_cpus attribute of an offlined CPU causes a crash. This
change prevents calling cpufreq_show_cpus() when the mask is NULL.
Alternatively, the NULL check could be added to the cpufreq_show_cpus()
function itself.
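
For reference, a minimal sketch of that alternative (hypothetical, not part of
this series) would put the guard inside cpufreq_show_cpus() in
drivers/cpufreq/cpufreq.c; the formatting loop below is simplified and only
illustrates where the check would sit:

ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
{
	ssize_t i = 0;
	unsigned int cpu;

	/* Hypothetical guard: bail out if the caller passed no mask. */
	if (unlikely(!mask))
		return -ENODEV;

	/* Simplified version of the existing loop printing the CPUs in @mask. */
	for_each_cpu(cpu, mask)
		i += scnprintf(&buf[i], PAGE_SIZE - i, i ? " %u" : "%u", cpu);
	i += scnprintf(&buf[i], PAGE_SIZE - i, "\n");

	return i;
}

The patch below takes the driver-side approach instead, which keeps the change
local to acpi-cpufreq.
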
Crash info:
[ 170.814949] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 170.814990] IP: [<ffffffff813b2490>] _find_next_bit.part.0+0x10/0x70
[ 170.815021] PGD 227d30067 PUD 229e56067 PMD 0
[ 170.815043] Oops: 0000 [#2] SMP
[ 170.816022] CPU: 3 PID: 3121 Comm: cat Tainted: G D OE 4.3.0-rc3+ #33
...
...
[ 170.816657] Call Trace:
[ 170.816672] [<ffffffff813b2505>] ? find_next_bit+0x15/0x20
[ 170.816696] [<ffffffff8160e47c>] cpufreq_show_cpus+0x5c/0xd0
[ 170.816722] [<ffffffffa031a409>] show_freqdomain_cpus+0x19/0x20 [acpi_cpufreq]
[ 170.816749] [<ffffffff8160e65b>] show+0x3b/0x60
[ 170.816769] [<ffffffff8129b31c>] sysfs_kf_seq_show+0xbc/0x130
[ 170.816793] [<ffffffff81299be3>] kernfs_seq_show+0x23/0x30
[ 170.816816] [<ffffffff81240f2c>] seq_read+0xec/0x390
[ 170.816837] [<ffffffff8129a64a>] kernfs_fop_read+0x10a/0x160
[ 170.816861] [<ffffffff8121d9b7>] __vfs_read+0x37/0x100
[ 170.816883] [<ffffffff813217c0>] ? security_file_permission+0xa0/0xc0
[ 170.816909] [<ffffffff8121e2e3>] vfs_read+0x83/0x130
[ 170.816930] [<ffffffff8121f035>] SyS_read+0x55/0xc0
...
...
[ 170.817185] ---[ end trace bc6eadf82b2b965a ]---
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
---
drivers/cpufreq/acpi-cpufreq.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index 7982772..cec1ee2 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -149,6 +149,9 @@ static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
 {
 	struct acpi_cpufreq_data *data = policy->driver_data;
 
+	if (unlikely(!data))
+		return -ENODEV;
+
 	return cpufreq_show_cpus(data->freqdomain_cpus, buf);
 }
 
--
2.4.3