From: Arnd Bergmann <arnd@arndb.de>
To: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@kernel.org>
Cc: linaro-kernel@lists.linaro.org,
"Rafael J. Wysocki" <rafael@kernel.org>,
Mark Brown <broonie@kernel.org>,
"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
kernel-build-reports@lists.linaro.org,
Viresh Kumar <viresh.kumar@linaro.org>,
"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
linux-next@vger.kernel.org,
Frederic Weisbecker <fweisbec@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org
Subject: [PATCH] irq_work: unhide irq_work_queue_on declaration on non-SMP
Date: Wed, 10 Feb 2016 16:07:20 +0100 [thread overview]
Message-ID: <4447865.IoQjlk8ngP@wuerfel> (raw)
In-Reply-To: <CAJZ5v0iamr-b5h=eS1SWo6WMj_NNOqnXoxnC6R03VmtLBG5dEA@mail.gmail.com>
The cpufreq code uses 'if (IS_ENABLED(CONFIG_SMP))' to check
whether it should queue a task on the local CPU or on a remote
one. However, the irq_work_queue_on() function is not declared
when CONFIG_SMP is not set:

drivers/cpufreq/cpufreq_governor.c: In function 'gov_queue_irq_work':
drivers/cpufreq/cpufreq_governor.c:251:3: error: implicit declaration of function 'irq_work_queue_on' [-Werror=implicit-function-declaration]
   irq_work_queue_on(&policy_dbs->irq_work, smp_processor_id());

This changes the conditional declaration so that, when CONFIG_SMP
is not set, irq_work_queue_on() simply queues the irq work on the
only available CPU, which is presumably what most callers want anyway.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 0144fa03ef46 ("cpufreq: governor: Replace timers with utilization update callbacks")
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 47b9ebd4a74f..c9bde50ef317 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -33,9 +33,13 @@ void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
 #define DEFINE_IRQ_WORK(name, _f) struct irq_work name = { .func = (_f), }
 
 bool irq_work_queue(struct irq_work *work);
-
 #ifdef CONFIG_SMP
 bool irq_work_queue_on(struct irq_work *work, int cpu);
+#else
+static inline bool irq_work_queue_on(struct irq_work *work, int cpu)
+{
+	return irq_work_queue(work);
+}
 #endif
 
 void irq_work_tick(void);
Thread overview: 6+ messages (as of 2016-02-10 15:08 UTC)
[not found] <E1aTQoy-0005cH-Tr@optimist>
2016-02-10 9:52 ` next-20160210 build: 2 failures 4 warnings (next-20160210) Mark Brown
2016-02-10 14:27 ` Rafael J. Wysocki
2016-02-10 15:04 ` Arnd Bergmann
2016-02-10 15:07 ` Arnd Bergmann [this message]
2016-02-10 15:27 ` [PATCH] irq_work: unhide irq_work_queue_on declaration on non-SMP Rafael J. Wysocki
2016-02-10 18:10 ` Mark Brown