* [PATCH 0/3] static keys: fix test/set races
@ 2013-06-28 22:30 jbaron
2013-06-28 22:30 ` [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface jbaron
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: jbaron @ 2013-06-28 22:30 UTC (permalink / raw)
To: rostedt, andi; +Cc: linux-kernel, mingo, peterz
Hi,
As pointed out by Andi Kleen, some static key users can be racy because they
check the value of key->enabled, and then subsequently update the branch
direction. A number of call sites have 'higher' level locking that avoids this
race, but the usage in the scheduler features does not. See:
http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
Thus, introduce a new API that does the check and set under the
'jump_label_mutex'. This should also allow us to simplify call sites a bit.
Users of static keys should use either the inc/dec or the set_true/set_false
API.
Thanks,
-Jason
Jason Baron (3):
static_keys: Add a static_key_slow_set_true()/false() interface
sched: fix static keys race in sched_feat
udp: make use of static_key_slow_set_true() interface
Documentation/static-keys.txt | 8 ++++++++
include/linux/jump_label.h | 30 ++++++++++++++++++++++++++++++
kernel/jump_label.c | 40 ++++++++++++++++++++++++++++++++++++++++
kernel/sched/core.c | 12 +++++-------
kernel/sched/sched.h | 10 +++++-----
net/ipv4/udp.c | 9 ++++-----
net/ipv6/udp.c | 9 ++++-----
7 files changed, 96 insertions(+), 22 deletions(-)
--
1.7.9.5
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
@ 2013-06-28 22:30 ` jbaron
2013-06-29 3:00 ` Steven Rostedt
2013-06-28 22:30 ` [PATCH 2/3] sched: fix static keys race in sched_feat jbaron
` (3 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: jbaron @ 2013-06-28 22:30 UTC (permalink / raw)
To: rostedt, andi; +Cc: linux-kernel, mingo, peterz
As pointed out by Andi Kleen, some static key users can be racy because they
check the value of key->enabled, and then subsequently update the branch
direction. A number of call sites have 'higher' level locking that avoids this
race, but the usage in the scheduler features does not. See:
http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
Thus, introduce a new API that does the check and set under the
'jump_label_mutex'. This should also allow us to simplify call sites a bit.
Users of static keys should use either the inc/dec or the set_true/set_false
API.
Signed-off-by: Jason Baron <jbaron@akamai.com>
---
Documentation/static-keys.txt | 8 ++++++++
include/linux/jump_label.h | 30 ++++++++++++++++++++++++++++++
kernel/jump_label.c | 40 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 78 insertions(+)
diff --git a/Documentation/static-keys.txt b/Documentation/static-keys.txt
index 9f5263d..4cca941 100644
--- a/Documentation/static-keys.txt
+++ b/Documentation/static-keys.txt
@@ -134,6 +134,14 @@ An example usage in the kernel is the implementation of tracepoints:
TP_CONDITION(cond)); \
}
+Keys can also be updated with the following calls:
+
+ static_key_slow_set_true(struct static_key *key);
+ static_key_slow_set_false(struct static_key *key);
+
+Users of the API should not mix the inc/dec with usages of
+set_true/set_false. That is, users should choose one or the other.
+
Tracepoints are disabled by default, and can be placed in performance critical
pieces of the kernel. Thus, by using a static key, the tracepoints can have
absolutely minimal impact when not in use.
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 0976fc4..787ab73 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -67,6 +67,11 @@ struct static_key_deferred {
struct delayed_work work;
};
+/* for use with set_true/set_false API */
+struct static_key_boolean {
+ struct static_key key;
+};
+
# include <asm/jump_label.h>
# define HAVE_JUMP_LABEL
#endif /* CC_HAVE_ASM_GOTO && CONFIG_JUMP_LABEL */
@@ -120,6 +125,8 @@ extern int jump_label_text_reserved(void *start, void *end);
extern void static_key_slow_inc(struct static_key *key);
extern void static_key_slow_dec(struct static_key *key);
extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
+extern void static_key_slow_set_true(struct static_key_boolean *key);
+extern void static_key_slow_set_false(struct static_key_boolean *key);
extern void jump_label_apply_nops(struct module *mod);
extern void
jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
@@ -128,6 +135,10 @@ jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
#define STATIC_KEY_INIT_FALSE ((struct static_key) \
{ .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
+#define STATIC_KEY_BOOLEAN_INIT_TRUE ((struct static_key_boolean) \
+ { .key.enabled = ATOMIC_INIT(1), .key.entries = (void *)1 })
+#define STATIC_KEY_BOOLEAN_INIT_FALSE ((struct static_key_boolean) \
+ { .key.enabled = ATOMIC_INIT(0), .key.entries = (void *)0 })
#else /* !HAVE_JUMP_LABEL */
@@ -137,6 +148,11 @@ struct static_key {
atomic_t enabled;
};
+/* for use with set_true/set_false API */
+struct static_key_boolean {
+ struct static_key key;
+};
+
static __always_inline void jump_label_init(void)
{
}
@@ -174,6 +190,16 @@ static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
static_key_slow_dec(&key->key);
}
+static inline void static_key_slow_set_true(struct static_key_boolean *key)
+{
+ atomic_set(&key->key.enabled, 1);
+}
+
+static inline void static_key_slow_set_false(struct static_key_boolean *key)
+{
+ atomic_set(&key->key.enabled, 0);
+}
+
static inline int jump_label_text_reserved(void *start, void *end)
{
return 0;
@@ -197,6 +223,10 @@ jump_label_rate_limit(struct static_key_deferred *key,
{ .enabled = ATOMIC_INIT(1) })
#define STATIC_KEY_INIT_FALSE ((struct static_key) \
{ .enabled = ATOMIC_INIT(0) })
+#define STATIC_KEY_BOOLEAN_INIT_TRUE ((struct static_key_boolean) \
+ { .key.enabled = ATOMIC_INIT(1) })
+#define STATIC_KEY_BOOLEAN_INIT_FALSE ((struct static_key_boolean) \
+ { .key.enabled = ATOMIC_INIT(0) })
#endif /* HAVE_JUMP_LABEL */
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 60f48fa..2234a4c 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -120,6 +120,46 @@ void jump_label_rate_limit(struct static_key_deferred *key,
}
EXPORT_SYMBOL_GPL(jump_label_rate_limit);
+void static_key_slow_set_true(struct static_key_boolean *key_boolean)
+{
+ struct static_key *key = (struct static_key *)key_boolean;
+ int enabled;
+
+ jump_label_lock();
+ enabled = atomic_read(&key->enabled);
+ if (enabled == 1)
+ goto out;
+ WARN(enabled != 0, "%s: key->enabled: %d\n", __func__, enabled);
+ if (!jump_label_get_branch_default(key))
+ jump_label_update(key, JUMP_LABEL_ENABLE);
+ else
+ jump_label_update(key, JUMP_LABEL_DISABLE);
+ atomic_set(&key->enabled, 1);
+out:
+ jump_label_unlock();
+}
+EXPORT_SYMBOL_GPL(static_key_slow_set_true);
+
+void static_key_slow_set_false(struct static_key_boolean *key_boolean)
+{
+ struct static_key *key = (struct static_key *)key_boolean;
+ int enabled;
+
+ jump_label_lock();
+ enabled = atomic_read(&key->enabled);
+ if (enabled == 0)
+ goto out;
+ WARN(enabled != 1, "%s: key->enabled: %d\n", __func__, enabled);
+ if (!jump_label_get_branch_default(key))
+ jump_label_update(key, JUMP_LABEL_DISABLE);
+ else
+ jump_label_update(key, JUMP_LABEL_ENABLE);
+ atomic_set(&key->enabled, 0);
+out:
+ jump_label_unlock();
+}
+EXPORT_SYMBOL_GPL(static_key_slow_set_false);
+
static int addr_conflict(struct jump_entry *entry, void *start, void *end)
{
if (entry->code <= (unsigned long)end &&
--
1.7.9.5
* [PATCH 2/3] sched: fix static keys race in sched_feat
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
2013-06-28 22:30 ` [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface jbaron
@ 2013-06-28 22:30 ` jbaron
2013-06-29 3:03 ` Steven Rostedt
2013-06-28 22:30 ` [PATCH 3/3] udp: make use of static_key_slow_set_true() interface jbaron
` (2 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: jbaron @ 2013-06-28 22:30 UTC (permalink / raw)
To: rostedt, andi; +Cc: linux-kernel, mingo, peterz
As pointed out by Andi Kleen, the usage of static keys can be racy in
sched_feat_disable() vs. sched_feat_enable(). Currently, we first check the
value of keys->enabled, and subsequently update the branch direction. This
can be racy and can potentially leave the keys in an inconsistent state.
Fix the race by using static_key_slow_set_true()/false(), which does the
check and set using the jump_label_mutex.
Signed-off-by: Jason Baron <jbaron@akamai.com>
---
kernel/sched/core.c | 12 +++++-------
kernel/sched/sched.h | 10 +++++-----
2 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e8b3350..2ebf82e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -165,13 +165,13 @@ static int sched_feat_show(struct seq_file *m, void *v)
#ifdef HAVE_JUMP_LABEL
-#define jump_label_key__true STATIC_KEY_INIT_TRUE
-#define jump_label_key__false STATIC_KEY_INIT_FALSE
+#define jump_label_key__true STATIC_KEY_BOOLEAN_INIT_TRUE
+#define jump_label_key__false STATIC_KEY_BOOLEAN_INIT_FALSE
#define SCHED_FEAT(name, enabled) \
jump_label_key__##enabled ,
-struct static_key sched_feat_keys[__SCHED_FEAT_NR] = {
+struct static_key_boolean sched_feat_keys[__SCHED_FEAT_NR] = {
#include "features.h"
};
@@ -179,14 +179,12 @@ struct static_key sched_feat_keys[__SCHED_FEAT_NR] = {
static void sched_feat_disable(int i)
{
- if (static_key_enabled(&sched_feat_keys[i]))
- static_key_slow_dec(&sched_feat_keys[i]);
+ static_key_slow_set_false(&sched_feat_keys[i]);
}
static void sched_feat_enable(int i)
{
- if (!static_key_enabled(&sched_feat_keys[i]))
- static_key_slow_inc(&sched_feat_keys[i]);
+ static_key_slow_set_true(&sched_feat_keys[i]);
}
#else
static void sched_feat_disable(int i) { };
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ce39224..88712dc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -742,17 +742,17 @@ static __always_inline bool static_branch__false(struct static_key *key)
return static_key_false(key); /* Out of line branch. */
}
-#define SCHED_FEAT(name, enabled) \
-static __always_inline bool static_branch_##name(struct static_key *key) \
-{ \
- return static_branch__##enabled(key); \
+#define SCHED_FEAT(name, enabled) \
+static __always_inline bool static_branch_##name(struct static_key_boolean *bool_key)\
+{ \
+ return static_branch__##enabled(&bool_key->key); \
}
#include "features.h"
#undef SCHED_FEAT
-extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
+extern struct static_key_boolean sched_feat_keys[__SCHED_FEAT_NR];
#define sched_feat(x) (static_branch_##x(&sched_feat_keys[__SCHED_FEAT_##x]))
#else /* !(SCHED_DEBUG && HAVE_JUMP_LABEL) */
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
--
1.7.9.5
* [PATCH 3/3] udp: make use of static_key_slow_set_true() interface
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
2013-06-28 22:30 ` [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface jbaron
2013-06-28 22:30 ` [PATCH 2/3] sched: fix static keys race in sched_feat jbaron
@ 2013-06-28 22:30 ` jbaron
2013-06-29 3:13 ` Steven Rostedt
2013-06-29 7:20 ` [PATCH 0/3] static keys: fix test/set races Ingo Molnar
2014-06-24 2:28 ` Steven Rostedt
4 siblings, 1 reply; 17+ messages in thread
From: jbaron @ 2013-06-28 22:30 UTC (permalink / raw)
To: rostedt, andi; +Cc: linux-kernel, netdev, mingo, peterz, davem
Make use of the simpler API.
Signed-off-by: Jason Baron <jbaron@akamai.com>
---
net/ipv4/udp.c | 9 ++++-----
net/ipv6/udp.c | 9 ++++-----
2 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 0bf5d39..b8d0029 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1421,11 +1421,10 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
}
-static struct static_key udp_encap_needed __read_mostly;
+static struct static_key_boolean udp_encap_needed __read_mostly;
void udp_encap_enable(void)
{
- if (!static_key_enabled(&udp_encap_needed))
- static_key_slow_inc(&udp_encap_needed);
+ static_key_slow_set_true(&udp_encap_needed);
}
EXPORT_SYMBOL(udp_encap_enable);
@@ -1450,7 +1449,7 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
goto drop;
nf_reset(skb);
- if (static_key_false(&udp_encap_needed) && up->encap_type) {
+ if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
/*
@@ -1773,7 +1772,7 @@ void udp_destroy_sock(struct sock *sk)
bool slow = lock_sock_fast(sk);
udp_flush_pending_frames(sk);
unlock_sock_fast(sk, slow);
- if (static_key_false(&udp_encap_needed) && up->encap_type) {
+ if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
void (*encap_destroy)(struct sock *sk);
encap_destroy = ACCESS_ONCE(up->encap_destroy);
if (encap_destroy)
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 42923b1..94a4091 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -573,11 +573,10 @@ static __inline__ void udpv6_err(struct sk_buff *skb,
__udp6_lib_err(skb, opt, type, code, offset, info, &udp_table);
}
-static struct static_key udpv6_encap_needed __read_mostly;
+static struct static_key_boolean udpv6_encap_needed __read_mostly;
void udpv6_encap_enable(void)
{
- if (!static_key_enabled(&udpv6_encap_needed))
- static_key_slow_inc(&udpv6_encap_needed);
+ static_key_slow_set_true(&udpv6_encap_needed);
}
EXPORT_SYMBOL(udpv6_encap_enable);
@@ -590,7 +589,7 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
goto drop;
- if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
+ if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
/*
@@ -1300,7 +1299,7 @@ void udpv6_destroy_sock(struct sock *sk)
udp_v6_flush_pending_frames(sk);
release_sock(sk);
- if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
+ if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
void (*encap_destroy)(struct sock *sk);
encap_destroy = ACCESS_ONCE(up->encap_destroy);
if (encap_destroy)
--
1.7.9.5
* Re: [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface
2013-06-28 22:30 ` [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface jbaron
@ 2013-06-29 3:00 ` Steven Rostedt
0 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2013-06-29 3:00 UTC (permalink / raw)
To: jbaron; +Cc: andi, linux-kernel, mingo, peterz
On Fri, 2013-06-28 at 22:30 +0000, jbaron@akamai.com wrote:
> As pointed out by Andi Kleen, some static key users can be racy because they
> check the value of the key->enabled, and then subsequently update the branch
> direction. A number of call sites have 'higher' level locking that avoids this
> race, but the usage in the scheduler features does not. See:
> http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
>
> Thus, introduce a new API that does the check and set under the
> 'jump_label_mutex'. This should also allow to simplify call sites a bit.
>
> Users of static keys should use either the inc/dec or the set_true/set_false
> API.
>
> Signed-off-by: Jason Baron <jbaron@akamai.com>
> ---
> Documentation/static-keys.txt | 8 ++++++++
> include/linux/jump_label.h | 30 ++++++++++++++++++++++++++++++
> kernel/jump_label.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 78 insertions(+)
>
> diff --git a/Documentation/static-keys.txt b/Documentation/static-keys.txt
> index 9f5263d..4cca941 100644
> --- a/Documentation/static-keys.txt
> +++ b/Documentation/static-keys.txt
> @@ -134,6 +134,14 @@ An example usage in the kernel is the implementation of tracepoints:
> TP_CONDITION(cond)); \
> }
>
> +Keys can also be updated with the following calls:
I would explain this a bit more. Something like:
When dealing with a simple on/off switch, where there's not a usage
counter involved with the static_key, the
static_key_slow_set_true/false() methods should be used.
> +
> + static_key_slow_set_true(struct static_key *key);
> + static_key_slow_set_false(struct static_key *key);
> +
> +Users should of the API should not mix the inc/dec with usages
> +of set_true/set_false. That is, users should choose one or the other.
Fix the above comment.
-- Steve
> +
> Tracepoints are disabled by default, and can be placed in performance critical
> pieces of the kernel. Thus, by using a static key, the tracepoints can have
> absolutely minimal impact when not in use.
> diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
> index 0976fc4..787ab73 100644
> --- a/include/linux/jump_label.h
> +++ b/include/linux/jump_label.h
> @@ -67,6 +67,11 @@ struct static_key_deferred {
> struct delayed_work work;
> };
>
> +/* for use with set_true/set_false API */
> +struct static_key_boolean {
> + struct static_key key;
> +};
> +
> # include <asm/jump_label.h>
> # define HAVE_JUMP_LABEL
> #endif /* CC_HAVE_ASM_GOTO && CONFIG_JUMP_LABEL */
> @@ -120,6 +125,8 @@ extern int jump_label_text_reserved(void *start, void *end);
> extern void static_key_slow_inc(struct static_key *key);
> extern void static_key_slow_dec(struct static_key *key);
> extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
> +extern void static_key_slow_set_true(struct static_key_boolean *key);
> +extern void static_key_slow_set_false(struct static_key_boolean *key);
> extern void jump_label_apply_nops(struct module *mod);
> extern void
> jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
> @@ -128,6 +135,10 @@ jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
> { .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
> #define STATIC_KEY_INIT_FALSE ((struct static_key) \
> { .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
> +#define STATIC_KEY_BOOLEAN_INIT_TRUE ((struct static_key_boolean) \
> + { .key.enabled = ATOMIC_INIT(1), .key.entries = (void *)1 })
> +#define STATIC_KEY_BOOLEAN_INIT_FALSE ((struct static_key_boolean) \
> + { .key.enabled = ATOMIC_INIT(0), .key.entries = (void *)0 })
>
> #else /* !HAVE_JUMP_LABEL */
>
> @@ -137,6 +148,11 @@ struct static_key {
> atomic_t enabled;
> };
>
> +/* for use with set_true/set_false API */
> +struct static_key_boolean {
> + struct static_key key;
> +};
> +
> static __always_inline void jump_label_init(void)
> {
> }
> @@ -174,6 +190,16 @@ static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
> static_key_slow_dec(&key->key);
> }
>
> +static inline void static_key_slow_set_true(struct static_key_boolean *key)
> +{
> + atomic_set(&key->key.enabled, 1);
> +}
> +
> +static inline void static_key_slow_set_false(struct static_key_boolean *key)
> +{
> + atomic_set(&key->key.enabled, 0);
> +}
> +
> static inline int jump_label_text_reserved(void *start, void *end)
> {
> return 0;
> @@ -197,6 +223,10 @@ jump_label_rate_limit(struct static_key_deferred *key,
> { .enabled = ATOMIC_INIT(1) })
> #define STATIC_KEY_INIT_FALSE ((struct static_key) \
> { .enabled = ATOMIC_INIT(0) })
> +#define STATIC_KEY_BOOLEAN_INIT_TRUE ((struct static_key_boolean) \
> + { .key.enabled = ATOMIC_INIT(1) })
> +#define STATIC_KEY_BOOLEAN_INIT_FALSE ((struct static_key_boolean) \
> + { .key.enabled = ATOMIC_INIT(0) })
>
> #endif /* HAVE_JUMP_LABEL */
>
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index 60f48fa..2234a4c 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -120,6 +120,46 @@ void jump_label_rate_limit(struct static_key_deferred *key,
> }
> EXPORT_SYMBOL_GPL(jump_label_rate_limit);
>
> +void static_key_slow_set_true(struct static_key_boolean *key_boolean)
> +{
> + struct static_key *key = (struct static_key *)key_boolean;
> + int enabled;
> +
> + jump_label_lock();
> + enabled = atomic_read(&key->enabled);
> + if (enabled == 1)
> + goto out;
> + WARN(enabled != 0, "%s: key->enabled: %d\n", __func__, enabled);
> + if (!jump_label_get_branch_default(key))
> + jump_label_update(key, JUMP_LABEL_ENABLE);
> + else
> + jump_label_update(key, JUMP_LABEL_DISABLE);
> + atomic_set(&key->enabled, 1);
> +out:
> + jump_label_unlock();
> +}
> +EXPORT_SYMBOL_GPL(static_key_slow_set_true);
> +
> +void static_key_slow_set_false(struct static_key_boolean *key_boolean)
> +{
> + struct static_key *key = (struct static_key *)key_boolean;
> + int enabled;
> +
> + jump_label_lock();
> + enabled = atomic_read(&key->enabled);
> + if (enabled == 0)
> + goto out;
> + WARN(enabled != 1, "%s: key->enabled: %d\n", __func__, enabled);
> + if (!jump_label_get_branch_default(key))
> + jump_label_update(key, JUMP_LABEL_DISABLE);
> + else
> + jump_label_update(key, JUMP_LABEL_ENABLE);
> + atomic_set(&key->enabled, 0);
> +out:
> + jump_label_unlock();
> +}
> +EXPORT_SYMBOL_GPL(static_key_slow_set_false);
> +
> static int addr_conflict(struct jump_entry *entry, void *start, void *end)
> {
> if (entry->code <= (unsigned long)end &&
* Re: [PATCH 2/3] sched: fix static keys race in sched_feat
2013-06-28 22:30 ` [PATCH 2/3] sched: fix static keys race in sched_feat jbaron
@ 2013-06-29 3:03 ` Steven Rostedt
0 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2013-06-29 3:03 UTC (permalink / raw)
To: jbaron; +Cc: andi, linux-kernel, mingo, peterz
On Fri, 2013-06-28 at 22:30 +0000, jbaron@akamai.com wrote:
> As pointed out by Andi Kleen, thue usage of static keys can be racy in
"thue"
> sched_feat_disable vs. sched_feat_enable(). Currently, we first check the
> value of keys->enabled, and subsequently update the branch direction. This,
> can be racy and can potentially leave the keys in an inconsistent state.
>
> Fix the race by using static_key_slow_set_true()/false(), which does the the
> check and set using the jump_label_mutex.
>
> Signed-off-by: Jason Baron <jbaron@akamai.com>
-- Steve
* Re: [PATCH 3/3] udp: make use of static_key_slow_set_true() interface
2013-06-28 22:30 ` [PATCH 3/3] udp: make use of static_key_slow_set_true() interface jbaron
@ 2013-06-29 3:13 ` Steven Rostedt
2013-07-01 4:20 ` Jason Baron
0 siblings, 1 reply; 17+ messages in thread
From: Steven Rostedt @ 2013-06-29 3:13 UTC (permalink / raw)
To: jbaron; +Cc: andi, linux-kernel, netdev, mingo, peterz, davem
On Fri, 2013-06-28 at 22:30 +0000, jbaron@akamai.com wrote:
> Make use of the simpler API.
Need to make the change log much more descriptive. Never assume someone
in the future who looks up a change to this file will know about other
commits that led to this. Each change log should be sufficient to stand
on its own.
Explain why this patch is needed. And it's not about the use of a
simpler API.
It actually fixes a real bug.
>
> Signed-off-by: Jason Baron <jbaron@akamai.com>
> ---
> net/ipv4/udp.c | 9 ++++-----
> net/ipv6/udp.c | 9 ++++-----
> 2 files changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 0bf5d39..b8d0029 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1421,11 +1421,10 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>
> }
>
> -static struct static_key udp_encap_needed __read_mostly;
> +static struct static_key_boolean udp_encap_needed __read_mostly;
> void udp_encap_enable(void)
> {
> - if (!static_key_enabled(&udp_encap_needed))
> - static_key_slow_inc(&udp_encap_needed);
> + static_key_slow_set_true(&udp_encap_needed);
> }
> EXPORT_SYMBOL(udp_encap_enable);
>
> @@ -1450,7 +1449,7 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
> goto drop;
> nf_reset(skb);
>
> - if (static_key_false(&udp_encap_needed) && up->encap_type) {
> + if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
I wonder if we should add a static_bool_key_false(), because that added
".key" is a bit confusing.
-- Steve
> int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
>
> /*
> @@ -1773,7 +1772,7 @@ void udp_destroy_sock(struct sock *sk)
> bool slow = lock_sock_fast(sk);
> udp_flush_pending_frames(sk);
> unlock_sock_fast(sk, slow);
> - if (static_key_false(&udp_encap_needed) && up->encap_type) {
> + if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
> void (*encap_destroy)(struct sock *sk);
> encap_destroy = ACCESS_ONCE(up->encap_destroy);
> if (encap_destroy)
> diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
> index 42923b1..94a4091 100644
> --- a/net/ipv6/udp.c
> +++ b/net/ipv6/udp.c
> @@ -573,11 +573,10 @@ static __inline__ void udpv6_err(struct sk_buff *skb,
> __udp6_lib_err(skb, opt, type, code, offset, info, &udp_table);
> }
>
> -static struct static_key udpv6_encap_needed __read_mostly;
> +static struct static_key_boolean udpv6_encap_needed __read_mostly;
> void udpv6_encap_enable(void)
> {
> - if (!static_key_enabled(&udpv6_encap_needed))
> - static_key_slow_inc(&udpv6_encap_needed);
> + static_key_slow_set_true(&udpv6_encap_needed);
> }
> EXPORT_SYMBOL(udpv6_encap_enable);
>
> @@ -590,7 +589,7 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
> if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
> goto drop;
>
> - if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
> + if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
> int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
>
> /*
> @@ -1300,7 +1299,7 @@ void udpv6_destroy_sock(struct sock *sk)
> udp_v6_flush_pending_frames(sk);
> release_sock(sk);
>
> - if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
> + if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
> void (*encap_destroy)(struct sock *sk);
> encap_destroy = ACCESS_ONCE(up->encap_destroy);
> if (encap_destroy)
* Re: [PATCH 0/3] static keys: fix test/set races
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
` (2 preceding siblings ...)
2013-06-28 22:30 ` [PATCH 3/3] udp: make use of static_key_slow_set_true() interface jbaron
@ 2013-06-29 7:20 ` Ingo Molnar
2013-07-01 4:12 ` Jason Baron
2014-06-24 2:28 ` Steven Rostedt
4 siblings, 1 reply; 17+ messages in thread
From: Ingo Molnar @ 2013-06-29 7:20 UTC (permalink / raw)
To: jbaron; +Cc: rostedt, andi, linux-kernel, peterz
* jbaron@akamai.com <jbaron@akamai.com> wrote:
> Hi,
>
> As pointed out by Andi Kleen, some static key users can be racy because they
> check the value of the key->enabled, and then subsequently update the branch
> direction. A number of call sites have 'higher' level locking that avoids this
> race, but the usage in the scheduler features does not. See:
> http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
But that's not an issue at all - switching the scheduler features is for
development and debugging only, and in some cases higher level locking
would be needed to solve it 'properly', beyond what the keys API could
give ...
So this is pretty pointless, sorry, please don't complicate this facility.
Thanks,
Ingo
* Re: [PATCH 0/3] static keys: fix test/set races
2013-06-29 7:20 ` [PATCH 0/3] static keys: fix test/set races Ingo Molnar
@ 2013-07-01 4:12 ` Jason Baron
2013-07-02 8:03 ` Peter Zijlstra
0 siblings, 1 reply; 17+ messages in thread
From: Jason Baron @ 2013-07-01 4:12 UTC (permalink / raw)
To: Ingo Molnar
Cc: rostedt@goodmis.org, andi@firstfloor.org,
linux-kernel@vger.kernel.org, peterz@infradead.org
On 06/29/2013 03:20 AM, Ingo Molnar wrote:
> * jbaron@akamai.com <jbaron@akamai.com> wrote:
>
>> Hi,
>>
>> As pointed out by Andi Kleen, some static key users can be racy because they
>> check the value of the key->enabled, and then subsequently update the branch
>> direction. A number of call sites have 'higher' level locking that avoids this
>> race, but the usage in the scheduler features does not. See:
>> http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
> But that's not an issue at all - switching the scheduler features is for
> development and debugging only, and in some cases higher level locking
> would be needed to solve it 'properly', beyond what the keys API could
> give ...
>
> So this is pretty pointless, sorry, please don't complicate this facility.
>
> Thanks,
>
> Ingo
Hi Ingo,
Yes, I agree that 'higher' level locking may be required for some
callers of the newly proposed interface. However, I do think that the
static_key_slow_set_true()/false() provides a nice abstraction for some
callers, while addressing test/set() races, by making that sequence atomic.
I view the proposed interface of set_true()/set_false() as somewhat
analogous to an atomic_set() call. In the same way, the current
static_key_slow_inc()/dec() are analogous to atomic_inc()/dec().
It arguably makes the code a bit more readable, transforming
sequences such as:
if (!static_key_enabled(&control_var))
static_key_slow_inc(&control_var);
into:
static_key_slow_set_true(&control_var);
I see at least 3 users of static_keys in the tree which I think would
benefit from this transformation. The 2 attached with this series, and
the usage in kernel/tracepoint.c.
Thanks,
-Jason
* Re: [PATCH 3/3] udp: make use of static_key_slow_set_true() interface
2013-06-29 3:13 ` Steven Rostedt
@ 2013-07-01 4:20 ` Jason Baron
2013-07-01 14:28 ` Steven Rostedt
0 siblings, 1 reply; 17+ messages in thread
From: Jason Baron @ 2013-07-01 4:20 UTC (permalink / raw)
To: Steven Rostedt
Cc: andi@firstfloor.org, linux-kernel@vger.kernel.org,
netdev@vger.kernel.org, mingo@kernel.org, peterz@infradead.org,
davem@davemloft.net
On 06/28/2013 11:13 PM, Steven Rostedt wrote:
> On Fri, 2013-06-28 at 22:30 +0000, jbaron@akamai.com wrote:
>> Make use of the simpler API.
> Need to make the change log much more descriptive. Never assume someone
> in the future that looks up a change to this file will know about other
> commits that led to this. Each change log should be sufficient to stand
> on its own.
>
> Explain why this patch is needed. And it's not about the use of a
> simpler API.
>
> It actually fixes a real bug.
>
>
>> Signed-off-by: Jason Baron <jbaron@akamai.com>
>> ---
>> net/ipv4/udp.c | 9 ++++-----
>> net/ipv6/udp.c | 9 ++++-----
>> 2 files changed, 8 insertions(+), 10 deletions(-)
>>
>> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
>> index 0bf5d39..b8d0029 100644
>> --- a/net/ipv4/udp.c
>> +++ b/net/ipv4/udp.c
>> @@ -1421,11 +1421,10 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>>
>> }
>>
>> -static struct static_key udp_encap_needed __read_mostly;
>> +static struct static_key_boolean udp_encap_needed __read_mostly;
>> void udp_encap_enable(void)
>> {
>> - if (!static_key_enabled(&udp_encap_needed))
>> - static_key_slow_inc(&udp_encap_needed);
>> + static_key_slow_set_true(&udp_encap_needed);
>> }
>> EXPORT_SYMBOL(udp_encap_enable);
>>
>> @@ -1450,7 +1449,7 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>> goto drop;
>> nf_reset(skb);
>>
>> - if (static_key_false(&udp_encap_needed) && up->encap_type) {
>> + if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
> I wonder if we should add a static_bool_key_false(), because that added
> ".key" is a bit confusing.
>
> -- Steve
>
Yeah - that is sort of ugly, but it avoids introducing a new branch API
call. That said, a 'static_bool_key_false()' would probably be a simple
wrapper function.
-Jason
>> int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
>>
>> /*
>> @@ -1773,7 +1772,7 @@ void udp_destroy_sock(struct sock *sk)
>> bool slow = lock_sock_fast(sk);
>> udp_flush_pending_frames(sk);
>> unlock_sock_fast(sk, slow);
>> - if (static_key_false(&udp_encap_needed) && up->encap_type) {
>> + if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
>> void (*encap_destroy)(struct sock *sk);
>> encap_destroy = ACCESS_ONCE(up->encap_destroy);
>> if (encap_destroy)
>> diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
>> index 42923b1..94a4091 100644
>> --- a/net/ipv6/udp.c
>> +++ b/net/ipv6/udp.c
>> @@ -573,11 +573,10 @@ static __inline__ void udpv6_err(struct sk_buff *skb,
>> __udp6_lib_err(skb, opt, type, code, offset, info, &udp_table);
>> }
>>
>> -static struct static_key udpv6_encap_needed __read_mostly;
>> +static struct static_key_boolean udpv6_encap_needed __read_mostly;
>> void udpv6_encap_enable(void)
>> {
>> - if (!static_key_enabled(&udpv6_encap_needed))
>> - static_key_slow_inc(&udpv6_encap_needed);
>> + static_key_slow_set_true(&udpv6_encap_needed);
>> }
>> EXPORT_SYMBOL(udpv6_encap_enable);
>>
>> @@ -590,7 +589,7 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>> if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
>> goto drop;
>>
>> - if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
>> + if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
>> int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
>>
>> /*
>> @@ -1300,7 +1299,7 @@ void udpv6_destroy_sock(struct sock *sk)
>> udp_v6_flush_pending_frames(sk);
>> release_sock(sk);
>>
>> - if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
>> + if (static_key_false(&udpv6_encap_needed.key) && up->encap_type) {
>> void (*encap_destroy)(struct sock *sk);
>> encap_destroy = ACCESS_ONCE(up->encap_destroy);
>> if (encap_destroy)
>
* Re: [PATCH 3/3] udp: make use of static_key_slow_set_true() interface
2013-07-01 4:20 ` Jason Baron
@ 2013-07-01 14:28 ` Steven Rostedt
0 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2013-07-01 14:28 UTC (permalink / raw)
To: Jason Baron
Cc: andi@firstfloor.org, linux-kernel@vger.kernel.org,
netdev@vger.kernel.org, mingo@kernel.org, peterz@infradead.org,
davem@davemloft.net
On Mon, 2013-07-01 at 00:20 -0400, Jason Baron wrote:
>
> >> @@ -1450,7 +1449,7 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
> >> goto drop;
> >> nf_reset(skb);
> >>
> >> - if (static_key_false(&udp_encap_needed) && up->encap_type) {
> >> + if (static_key_false(&udp_encap_needed.key) && up->encap_type) {
> > I wonder if we should add a static_bool_key_false(), because that added
> > ".key" is a bit confusing.
> >
> > -- Steve
> >
>
> Yeah - that is sort of ugly, but it avoids introducing a new branch API
> call. That said, a 'static_bool_key_false()' would probably be a simple
> wrapper function.
My main concern was to keep true/false separate from counters to
document how this is being used. A simple wrapper would work too.
-- Steve
* Re: [PATCH 0/3] static keys: fix test/set races
2013-07-01 4:12 ` Jason Baron
@ 2013-07-02 8:03 ` Peter Zijlstra
2013-07-02 9:38 ` Ingo Molnar
0 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2013-07-02 8:03 UTC (permalink / raw)
To: Jason Baron
Cc: Ingo Molnar, rostedt@goodmis.org, andi@firstfloor.org,
linux-kernel@vger.kernel.org
On Mon, Jul 01, 2013 at 12:12:11AM -0400, Jason Baron wrote:
>
> Yes, I agree that 'higher' level locking may be required for some callers of
> the newly proposed interface. However, I do think that the
> static_key_slow_set_true()/false() provides a nice abstraction for some
> callers, while addressing test/set() races, by making that sequence atomic.
>
> I view the proposed interface of set_true()/set_false() as somewhat analogous
> to an atomic_set() call. In the same way, the current
> static_key_slow_inc()/dec() are analogous to atomic_inc()/dec().
>
> It arguably makes the code a bit more readable, transforming sequences
> such as:
>
> if (!static_key_enabled(&control_var))
> static_key_slow_inc(&control_var);
>
> into:
>
> static_key_slow_set_true(&control_var);
>
>
> I see at least 3 users of static_keys in the tree which I think would
> benefit from this transformation. The 2 attached with this series, and the
> usage in kernel/tracepoint.c.
I tend to agree with Jason here. I also don't think the scheduler needs this;
but the new API is more usable for binary switches as opposed to the refcount
thing.
* Re: [PATCH 0/3] static keys: fix test/set races
2013-07-02 8:03 ` Peter Zijlstra
@ 2013-07-02 9:38 ` Ingo Molnar
0 siblings, 0 replies; 17+ messages in thread
From: Ingo Molnar @ 2013-07-02 9:38 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Jason Baron, rostedt@goodmis.org, andi@firstfloor.org,
linux-kernel@vger.kernel.org
* Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Jul 01, 2013 at 12:12:11AM -0400, Jason Baron wrote:
> >
> > Yes, I agree that 'higher' level locking may be required for some callers of
> > the newly proposed interface. However, I do think that the
> > static_key_slow_set_true()/false() provides a nice abstraction for some
> > callers, while addressing test/set() races, by making that sequence atomic.
> >
> > I view the proposed interface of set_true()/set_false() as somewhat analogous
> > to an atomic_set() call. In the same way, the current
> > static_key_slow_inc()/dec() are analogous to atomic_inc()/dec().
> >
> > It arguably makes the code a bit more readable, transforming sequences
> > such as:
> >
> > if (!static_key_enabled(&control_var))
> > static_key_slow_inc(&control_var);
> >
> > into:
> >
> > static_key_slow_set_true(&control_var);
> >
> >
> > I see at least 3 users of static_keys in the tree which I think would
> > benefit from this transformation. The 2 attached with this series, and the
> > usage in kernel/tracepoint.c.
>
> I tend to agree with Jason here. I also don't think the scheduler needs
> this; but the new API is more usable for binary switches as opposed to
> the refcount thing.
Ok - no objections then from me either.
Thanks,
Ingo
* Re: [PATCH 0/3] static keys: fix test/set races
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
` (3 preceding siblings ...)
2013-06-29 7:20 ` [PATCH 0/3] static keys: fix test/set races Ingo Molnar
@ 2014-06-24 2:28 ` Steven Rostedt
2014-06-24 2:41 ` Jason Baron
2014-06-30 21:43 ` Jason Baron
4 siblings, 2 replies; 17+ messages in thread
From: Steven Rostedt @ 2014-06-24 2:28 UTC (permalink / raw)
To: jbaron; +Cc: andi, linux-kernel, mingo, peterz
Cleaning out my INBOX I found this patch series. It seems to have been
forgotten about. It ended up with Ingo and Peter agreeing with the way
things should be done and I thought Jason was going to send an update.
But that seems to never have happened.
Does this patch series still look legit? Should we pursue it?
-- Steve
On Fri, 28 Jun 2013 22:30:28 +0000 (GMT)
jbaron@akamai.com wrote:
> Hi,
>
> As pointed out by Andi Kleen, some static key users can be racy because they
> check the value of the key->enabled, and then subsequently update the branch
> direction. A number of call sites have 'higher' level locking that avoids this
> race, but the usage in the scheduler features does not. See:
> http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
>
> Thus, introduce a new API that does the check and set under the
> 'jump_label_mutex'. This should also allow to simplify call sites a bit.
>
> Users of static keys should use either the inc/dec or the set_true/set_false
> API.
>
> Thanks,
>
> -Jason
>
>
> Jason Baron (3):
> static_keys: Add a static_key_slow_set_true()/false() interface
> sched: fix static keys race in sched_feat
> udp: make use of static_key_slow_set_true() interface
>
> Documentation/static-keys.txt | 8 ++++++++
> include/linux/jump_label.h | 30 ++++++++++++++++++++++++++++++
> kernel/jump_label.c | 40 ++++++++++++++++++++++++++++++++++++++++
> kernel/sched/core.c | 12 +++++-------
> kernel/sched/sched.h | 10 +++++-----
> net/ipv4/udp.c | 9 ++++-----
> net/ipv6/udp.c | 9 ++++-----
> 7 files changed, 96 insertions(+), 22 deletions(-)
>
* Re: [PATCH 0/3] static keys: fix test/set races
2014-06-24 2:28 ` Steven Rostedt
@ 2014-06-24 2:41 ` Jason Baron
2014-06-30 21:43 ` Jason Baron
1 sibling, 0 replies; 17+ messages in thread
From: Jason Baron @ 2014-06-24 2:41 UTC (permalink / raw)
To: Steven Rostedt
Cc: andi@firstfloor.org, linux-kernel@vger.kernel.org,
mingo@kernel.org, peterz@infradead.org
On 06/23/2014 10:28 PM, Steven Rostedt wrote:
> Cleaning out my INBOX I found this patch series. It seems to have been
> forgotten about. It ended up with Ingo and Peter agreeing with the way
> things should be done and I thought Jason was going to send an update.
> But that seems to never have happened.
>
> Does this patch series still look legit? Should we pursue it?
>
> -- Steve
Hi Steve,
I still think it's a reasonable addition - I'll re-post later
this week for comments.
Thanks,
-Jason
* Re: [PATCH 0/3] static keys: fix test/set races
2014-06-24 2:28 ` Steven Rostedt
2014-06-24 2:41 ` Jason Baron
@ 2014-06-30 21:43 ` Jason Baron
2014-06-30 22:36 ` Steven Rostedt
1 sibling, 1 reply; 17+ messages in thread
From: Jason Baron @ 2014-06-30 21:43 UTC (permalink / raw)
To: Steven Rostedt
Cc: andi@firstfloor.org, linux-kernel@vger.kernel.org,
mingo@kernel.org, peterz@infradead.org
Hi Steve,
I took a closer look at this, and I'm thinking now that it's simpler to
just take the &inode->i_mutex in sched_feat_write(), surrounding the
test/set jump_label call. It should be about 3 lines :)
I started re-working this with the patches in this series and it just
seemed like a lot of code for only 1 current use-case. (The udp case
doesn't appear to disable the branch and thus is not racy.)
So I've swung back to what Ingo originally said - I can test/post the
suggested 3-line patch, unless there are other thoughts...
Thanks,
-Jason
On 06/23/2014 10:28 PM, Steven Rostedt wrote:
>
> Cleaning out my INBOX I found this patch series. It seems to have been
> forgotten about. It ended up with Ingo and Peter agreeing with the way
> things should be done and I thought Jason was going to send an update.
> But that seems to never have happened.
>
> Does this patch series still look legit? Should we pursue it?
>
> -- Steve
>
> On Fri, 28 Jun 2013 22:30:28 +0000 (GMT)
> jbaron@akamai.com wrote:
>
>> Hi,
>>
>> As pointed out by Andi Kleen, some static key users can be racy because they
>> check the value of the key->enabled, and then subsequently update the branch
>> direction. A number of call sites have 'higher' level locking that avoids this
>> race, but the usage in the scheduler features does not. See:
>> http://lkml.indiana.edu/hypermail/linux/kernel/1304.2/01655.html
>>
>> Thus, introduce a new API that does the check and set under the
>> 'jump_label_mutex'. This should also allow to simplify call sites a bit.
>>
>> Users of static keys should use either the inc/dec or the set_true/set_false
>> API.
>>
>> Thanks,
>>
>> -Jason
>>
>>
>> Jason Baron (3):
>> static_keys: Add a static_key_slow_set_true()/false() interface
>> sched: fix static keys race in sched_feat
>> udp: make use of static_key_slow_set_true() interface
>>
>> Documentation/static-keys.txt | 8 ++++++++
>> include/linux/jump_label.h | 30 ++++++++++++++++++++++++++++++
>> kernel/jump_label.c | 40 ++++++++++++++++++++++++++++++++++++++++
>> kernel/sched/core.c | 12 +++++-------
>> kernel/sched/sched.h | 10 +++++-----
>> net/ipv4/udp.c | 9 ++++-----
>> net/ipv6/udp.c | 9 ++++-----
>> 7 files changed, 96 insertions(+), 22 deletions(-)
>>
>
* Re: [PATCH 0/3] static keys: fix test/set races
2014-06-30 21:43 ` Jason Baron
@ 2014-06-30 22:36 ` Steven Rostedt
0 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2014-06-30 22:36 UTC (permalink / raw)
To: Jason Baron
Cc: andi@firstfloor.org, linux-kernel@vger.kernel.org,
mingo@kernel.org, peterz@infradead.org
On Mon, 30 Jun 2014 17:43:50 -0400
Jason Baron <jbaron@akamai.com> wrote:
> Hi Steve,
>
> I took a closer look at this, and I'm thinking now that it's simpler to
> just take the &inode->i_mutex in sched_feat_write(), surrounding the
> test/set jump_label call. It should be about 3 lines :)
Smaller patches are always nice.
>
> I started re-working this with the patches in this series and it just
> seemed like a lot of code for only 1 current use-case. (The udp case
> doesn't appear to disable the branch and thus is not racy.)
>
> So I've swung back to what Ingo originally said - I can test/post the
> suggested 3-line patch, unless there are other thoughts...
Well, the final decision lies with Peter and Ingo. I only noticed it
while cleaning out my inbox and didn't want it to be forgotten.
-- Steve
end of thread, other threads:[~2014-06-30 22:36 UTC | newest]
Thread overview: 17+ messages
-- links below jump to the message on this page --
2013-06-28 22:30 [PATCH 0/3] static keys: fix test/set races jbaron
2013-06-28 22:30 ` [PATCH 1/3] static_keys: Add a static_key_slow_set_true()/false() interface jbaron
2013-06-29 3:00 ` Steven Rostedt
2013-06-28 22:30 ` [PATCH 2/3] sched: fix static keys race in sched_feat jbaron
2013-06-29 3:03 ` Steven Rostedt
2013-06-28 22:30 ` [PATCH 3/3] udp: make use of static_key_slow_set_true() interface jbaron
2013-06-29 3:13 ` Steven Rostedt
2013-07-01 4:20 ` Jason Baron
2013-07-01 14:28 ` Steven Rostedt
2013-06-29 7:20 ` [PATCH 0/3] static keys: fix test/set races Ingo Molnar
2013-07-01 4:12 ` Jason Baron
2013-07-02 8:03 ` Peter Zijlstra
2013-07-02 9:38 ` Ingo Molnar
2014-06-24 2:28 ` Steven Rostedt
2014-06-24 2:41 ` Jason Baron
2014-06-30 21:43 ` Jason Baron
2014-06-30 22:36 ` Steven Rostedt