* [PATCH v3 0/2] livepatch: Add using attribute to klp_func for using function
@ 2024-08-22 3:01 Wardenjohn
2024-08-22 3:01 ` [PATCH v3 1/2] Introduce klp_ops into klp_func structure Wardenjohn
2024-08-22 3:01 ` [PATCH v3 2/2] livepatch: Add using attribute to klp_func for using function show Wardenjohn
From: Wardenjohn @ 2024-08-22 3:01 UTC (permalink / raw)
To: jpoimboe, mbenes, jikos, pmladek, joe.lawrence
Cc: live-patching, linux-kernel
This patchset introduces a sysfs attribute "using" to klp_func.
For example, if several patches modify the function
"meminfo_proc_show", the "enabled" attribute of all of those patches
is 1. With the new "using" attribute, we can easily tell which patch
provides the version of the function that is actually running.
Changes v1 => v2:
1. Change the type of "using" from bool to int.
2. "using" now has three values: 0 (not used),
1 (in use), -1 (unknown). (As suggested by Petr.)
3. Handle the transition state (the -1 value). (Suggested by Petr and Miroslav.)
v1: https://lore.kernel.org/live-patching/1A0E6D0A-4245-4F44-8667-CDD86A925347@gmail.com/T/#t
Changes v2 => v3:
1. Move the definition of struct klp_ops into include/linux/livepatch.h.
2. Add a klp_ops pointer to struct klp_func. (Suggested by Petr.)
3. Rewrite klp_find_ops().
4. Adjust the logic of the "using" feature accordingly.
In klp_complete_transition(), if the target state is KLP_TRANSITION_PATCHED,
we look at the next node on the function's func_stack. If there is no other
node, this is the only version of the function. Otherwise, the next node is
an older version and its "using" state can be set to 0.
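Roughly, the intended logic inside klp_complete_transition() looks like this
(a simplified sketch of the idea only, not the exact hunk from patch 2/2,
which uses the RCU list helpers):

	klp_for_each_object(klp_transition_patch, obj) {
		klp_for_each_func(obj, func) {
			func->using = 1;
			func->transition = false;

			/* mark the older version below us, if any, as not used */
			if (func->ops &&
			    !list_is_last(&func->stack_node, &func->ops->func_stack)) {
				struct klp_func *prev = list_next_entry(func, stack_node);

				prev->using = 0;
			}
		}
	}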
BTW, this time I remembered to run ./scripts/checkpatch.pl on the patches. Hah. :)
* [PATCH v3 1/2] Introduce klp_ops into klp_func structure
2024-08-22 3:01 [PATCH v3 0/2] livepatch: Add using attribute to klp_func for using function Wardenjohn
@ 2024-08-22 3:01 ` Wardenjohn
2024-08-25 4:48 ` Christoph Hellwig
2024-08-22 3:01 ` [PATCH v3 2/2] livepatch: Add using attribute to klp_func for using function show Wardenjohn
From: Wardenjohn @ 2024-08-22 3:01 UTC (permalink / raw)
To: jpoimboe, mbenes, jikos, pmladek, joe.lawrence
Cc: live-patching, linux-kernel, Wardenjohn
1. Add a klp_ops pointer to struct klp_func and
rewrite klp_find_ops() and the other code that
looks up the klp_ops of a function.

2. Move the definition of struct klp_ops into
include/linux/livepatch.h.
Signed-off-by: Wardenjohn <zhangwarden@gmail.com>
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 51a258c24ff5..d874aecc817b 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -22,6 +22,25 @@
#define KLP_TRANSITION_UNPATCHED 0
#define KLP_TRANSITION_PATCHED 1
+/**
+ * struct klp_ops - structure for tracking registered ftrace ops structs
+ *
+ * A single ftrace_ops is shared between all enabled replacement functions
+ * (klp_func structs) which have the same old_func. This allows the switch
+ * between function versions to happen instantaneously by updating the klp_ops
+ * struct's func_stack list. The winner is the klp_func at the top of the
+ * func_stack (front of the list).
+ *
+ * @node: node for the global klp_ops list
+ * @func_stack: list head for the stack of klp_func's (active func is on top)
+ * @fops: registered ftrace ops struct
+ */
+struct klp_ops {
+ struct list_head node;
+ struct list_head func_stack;
+ struct ftrace_ops fops;
+};
+
/**
* struct klp_func - function structure for live patching
* @old_name: name of the function to be patched
@@ -32,6 +51,7 @@
* @kobj: kobject for sysfs resources
* @node: list node for klp_object func_list
* @stack_node: list node for klp_ops func_stack list
+ * @ops: pointer to klp_ops struct for this function
* @old_size: size of the old function
* @new_size: size of the new function
* @nop: temporary patch to use the original code again; dyn. allocated
@@ -71,6 +91,7 @@ struct klp_func {
struct kobject kobj;
struct list_head node;
struct list_head stack_node;
+ struct klp_ops *ops;
unsigned long old_size, new_size;
bool nop;
bool patched;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 52426665eecc..e4572bf34316 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -760,6 +760,8 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
if (!func->old_name)
return -EINVAL;
+ func->ops = NULL;
+
/*
* NOPs get the address later. The patched module must be loaded,
* see klp_init_object_loaded().
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 90408500e5a3..8ab9c35570f4 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -20,18 +20,25 @@
#include "patch.h"
#include "transition.h"
-static LIST_HEAD(klp_ops);
struct klp_ops *klp_find_ops(void *old_func)
{
- struct klp_ops *ops;
+ struct klp_patch *patch;
+ struct klp_object *obj;
struct klp_func *func;
- list_for_each_entry(ops, &klp_ops, node) {
- func = list_first_entry(&ops->func_stack, struct klp_func,
- stack_node);
- if (func->old_func == old_func)
- return ops;
+ klp_for_each_patch(patch) {
+ klp_for_each_object(patch, obj) {
+ klp_for_each_func(obj, func) {
+ /*
+ * Ignore entry where func->ops has not been
+ * assigned yet. It is most likely the one
+ * which is about to be created/added.
+ */
+ if (func->old_func == old_func && func->ops)
+ return func->ops;
+ }
+ }
}
return NULL;
@@ -133,7 +140,7 @@ static void klp_unpatch_func(struct klp_func *func)
if (WARN_ON(!func->old_func))
return;
- ops = klp_find_ops(func->old_func);
+ ops = func->ops;
if (WARN_ON(!ops))
return;
@@ -149,6 +156,7 @@ static void klp_unpatch_func(struct klp_func *func)
list_del_rcu(&func->stack_node);
list_del(&ops->node);
+ func->ops = NULL;
kfree(ops);
} else {
list_del_rcu(&func->stack_node);
@@ -168,7 +176,7 @@ static int klp_patch_func(struct klp_func *func)
if (WARN_ON(func->patched))
return -EINVAL;
- ops = klp_find_ops(func->old_func);
+ ops = func->ops;
if (!ops) {
unsigned long ftrace_loc;
@@ -191,8 +199,6 @@ static int klp_patch_func(struct klp_func *func)
FTRACE_OPS_FL_IPMODIFY |
FTRACE_OPS_FL_PERMANENT;
- list_add(&ops->node, &klp_ops);
-
INIT_LIST_HEAD(&ops->func_stack);
list_add_rcu(&func->stack_node, &ops->func_stack);
@@ -211,7 +217,7 @@ static int klp_patch_func(struct klp_func *func)
goto err;
}
-
+ func->ops = ops;
} else {
list_add_rcu(&func->stack_node, &ops->func_stack);
}
@@ -224,6 +230,7 @@ static int klp_patch_func(struct klp_func *func)
list_del_rcu(&func->stack_node);
list_del(&ops->node);
kfree(ops);
+ func->ops = NULL;
return ret;
}
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index d5f2fbe373e0..21d0d20b7189 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -6,25 +6,6 @@
#include <linux/list.h>
#include <linux/ftrace.h>
-/**
- * struct klp_ops - structure for tracking registered ftrace ops structs
- *
- * A single ftrace_ops is shared between all enabled replacement functions
- * (klp_func structs) which have the same old_func. This allows the switch
- * between function versions to happen instantaneously by updating the klp_ops
- * struct's func_stack list. The winner is the klp_func at the top of the
- * func_stack (front of the list).
- *
- * @node: node for the global klp_ops list
- * @func_stack: list head for the stack of klp_func's (active func is on top)
- * @fops: registered ftrace ops struct
- */
-struct klp_ops {
- struct list_head node;
- struct list_head func_stack;
- struct ftrace_ops fops;
-};
-
struct klp_ops *klp_find_ops(void *old_func);
int klp_patch_object(struct klp_object *obj);
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index ba069459c101..d9a3f9c7a93b 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -230,7 +230,7 @@ static int klp_check_stack_func(struct klp_func *func, unsigned long *entries,
* Check for the to-be-patched function
* (the previous func).
*/
- ops = klp_find_ops(func->old_func);
+ ops = func->ops;
if (list_is_singular(&ops->func_stack)) {
/* original function */
--
2.18.2
* [PATCH v3 2/2] livepatch: Add using attribute to klp_func for using function show
2024-08-22 3:01 [PATCH v3 0/2] livepatch: Add using attribute to klp_func for using function Wardenjohn
2024-08-22 3:01 ` [PATCH v3 1/2] Introduce klp_ops into klp_func structure Wardenjohn
@ 2024-08-22 3:01 ` Wardenjohn
From: Wardenjohn @ 2024-08-22 3:01 UTC (permalink / raw)
To: jpoimboe, mbenes, jikos, pmladek, joe.lawrence
Cc: live-patching, linux-kernel, Wardenjohn
A system may contain more than one livepatch module, and we can see
which patches are enabled. If several patches applied to the system
modify the same function, livepatch runs the version on top of that
function's func_stack. However, we cannot tell exactly which patch
provides the version that is currently running.

This patch introduces a sysfs attribute "using" to klp_func.
For example, if several patches modify the function
"meminfo_proc_show", the "enabled" attribute of all of those patches
is 1. With the new "using" attribute, we can easily tell which patch
provides the version of the function that is actually running.
The "using" is set as three state. 0 is disabled, it means that this
version of function is not used. 1 is running, it means that this
version of function is now running. -1 is unknown, it means that
this version of function is under transition, some task is still
chaning their running version of this function.
cat /sys/kernel/livepatch/<patch1>/<object1>/<function1,sympos>/using -> 0
means that function1 of patch1 is not used.
cat /sys/kernel/livepatch/<patchN>/<object1>/<function1,sympos>/using -> 1
means that function1 of patchN is the version in use.
cat /sys/kernel/livepatch/<patchN>/<object1>/<function1,sympos>/using -> -1
means that function1 of patchN is in transition, so its state is unknown.
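For illustration only (this program is not part of the patch), userspace can
read the attribute and interpret the value like this; the sysfs path below is
just an example:

	#include <stdio.h>

	int main(void)
	{
		const char *path =
			"/sys/kernel/livepatch/patch1/vmlinux/meminfo_proc_show,1/using";
		FILE *f = fopen(path, "r");
		int using;

		if (!f || fscanf(f, "%d", &using) != 1) {
			fprintf(stderr, "cannot read %s\n", path);
			return 1;
		}
		fclose(f);

		if (using == 1)
			printf("this version of the function is running\n");
		else if (using == 0)
			printf("this version of the function is not used\n");
		else
			printf("in transition, state unknown\n");
		return 0;
	}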
Signed-off-by: Wardenjohn <zhangwarden@gmail.com>
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index d874aecc817b..5a6bacebd66f 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -57,6 +57,7 @@ struct klp_ops {
* @nop: temporary patch to use the original code again; dyn. allocated
* @patched: the func has been added to the klp_ops list
* @transition: the func is currently being applied or reverted
+ * @using: whether this version of the func is currently in use
*
* The patched and transition variables define the func's patching state. When
* patching, a func is always in one of the following states:
@@ -72,6 +73,12 @@ struct klp_ops {
* patched=1 transition=1: patched, may be visible to some tasks
* patched=0 transition=1: unpatched, temporary ending state
* patched=0 transition=0: unpatched
+ *
+ * The 'using' value shows whether this version of the function is in use:
+ *
+ * using=-1 (unknown): the function is in transition
+ * using=1 (in use): this version of the function is currently running
+ * using=0 (not used): this version of the function is not used
*/
struct klp_func {
/* external */
@@ -96,6 +103,7 @@ struct klp_func {
bool nop;
bool patched;
bool transition;
+ int using;
};
struct klp_object;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index e4572bf34316..bc1b2085e3c5 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -349,6 +349,7 @@ int klp_apply_section_relocs(struct module *pmod, Elf_Shdr *sechdrs,
* /sys/kernel/livepatch/<patch>/<object>
* /sys/kernel/livepatch/<patch>/<object>/patched
* /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
+ * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>/using
*/
static int __klp_disable_patch(struct klp_patch *patch);
@@ -470,6 +471,22 @@ static struct attribute *klp_object_attrs[] = {
};
ATTRIBUTE_GROUPS(klp_object);
+static ssize_t using_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct klp_func *func;
+
+ func = container_of(kobj, struct klp_func, kobj);
+ return sysfs_emit(buf, "%d\n", func->using);
+}
+
+static struct kobj_attribute using_kobj_attr = __ATTR_RO(using);
+static struct attribute *klp_func_attrs[] = {
+ &using_kobj_attr.attr,
+ NULL,
+};
+ATTRIBUTE_GROUPS(klp_func);
+
static void klp_free_object_dynamic(struct klp_object *obj)
{
kfree(obj->name);
@@ -631,6 +648,7 @@ static void klp_kobj_release_func(struct kobject *kobj)
static const struct kobj_type klp_ktype_func = {
.release = klp_kobj_release_func,
.sysfs_ops = &kobj_sysfs_ops,
+ .default_groups = klp_func_groups,
};
static void __klp_free_funcs(struct klp_object *obj, bool nops_only)
@@ -775,6 +793,7 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
INIT_LIST_HEAD(&func->stack_node);
func->patched = false;
func->transition = false;
+ func->using = 0;
/* The format for the sysfs directory is <function,sympos> where sympos
* is the nth occurrence of this symbol in kallsyms for the patched
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 8ab9c35570f4..5138cedfcfaa 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -134,6 +134,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
static void klp_unpatch_func(struct klp_func *func)
{
struct klp_ops *ops;
+ struct klp_func *stack_top_func;
if (WARN_ON(!func->patched))
return;
@@ -160,6 +161,10 @@ static void klp_unpatch_func(struct klp_func *func)
kfree(ops);
} else {
list_del_rcu(&func->stack_node);
+ /* the previous version was removed; the new stack top is in transition */
+ stack_top_func = list_first_entry(&ops->func_stack, struct klp_func,
+ stack_node);
+ stack_top_func->using = -1;
}
func->patched = false;
@@ -168,6 +173,7 @@ static void klp_unpatch_func(struct klp_func *func)
static int klp_patch_func(struct klp_func *func)
{
struct klp_ops *ops;
+ struct klp_func *stack_top_func;
int ret;
if (WARN_ON(!func->old_func))
@@ -219,10 +225,16 @@ static int klp_patch_func(struct klp_func *func)
func->ops = ops;
} else {
+ /* the current stack top is going to be in transition */
+ stack_top_func = list_first_entry(&ops->func_stack, struct klp_func,
+ stack_node);
+ stack_top_func->using = -1;
+ /* the newly added function becomes the one in use */
list_add_rcu(&func->stack_node, &ops->func_stack);
}
func->patched = true;
+ func->using = -1;
return 0;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index d9a3f9c7a93b..365dac635efe 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -90,8 +90,9 @@ static void klp_synchronize_transition(void)
static void klp_complete_transition(void)
{
struct klp_object *obj;
- struct klp_func *func;
+ struct klp_func *func, *next_func, *stack_top_func;
struct task_struct *g, *task;
+ struct klp_ops *ops;
unsigned int cpu;
pr_debug("'%s': completing %s transition\n",
@@ -119,9 +120,39 @@ static void klp_complete_transition(void)
klp_synchronize_transition();
}
- klp_for_each_object(klp_transition_patch, obj)
- klp_for_each_func(obj, func)
- func->transition = false;
+ /*
+ * The transition is finished. The function at the top of each
+ * func_stack is now truly running. The previous version's "using"
+ * should be set to 0 because no task uses it anymore.
+ *
+ * If the patch was being applied, every one of its functions is in use.
+ * If it was being reverted, the function now on top of each func_stack is in use.
+ */
+ if (klp_target_state == KLP_TRANSITION_PATCHED) {
+ klp_for_each_object(klp_transition_patch, obj) {
+ klp_for_each_func(obj, func) {
+ func->using = 1;
+ func->transition = false;
+ next_func = list_entry_rcu(func->stack_node.next,
+ struct klp_func, stack_node);
+ if (&func->stack_node != &func->ops->func_stack)
+ next_func->using = 0;
+ }
+ }
+ } else {
+ /* after unpatching, if the ops still exists, the stack-top function is in use */
+ klp_for_each_object(klp_transition_patch, obj) {
+ klp_for_each_func(obj, func) {
+ func->transition = false;
+ ops = func->ops;
+ if (ops) {
+ stack_top_func = list_first_entry(&ops->func_stack,
+ struct klp_func, stack_node);
+ stack_top_func->using = 1;
+ }
+ }
+ }
+ }
/* Prevent klp_ftrace_handler() from seeing KLP_TRANSITION_IDLE state */
if (klp_target_state == KLP_TRANSITION_PATCHED)
--
2.18.2
* Re: [PATCH v3 1/2] Introduce klp_ops into klp_func structure
2024-08-22 3:01 ` [PATCH v3 1/2] Introduce klp_ops into klp_func structure Wardenjohn
@ 2024-08-25 4:48 ` Christoph Hellwig
2024-08-25 14:37 ` zhang warden
From: Christoph Hellwig @ 2024-08-25 4:48 UTC (permalink / raw)
To: Wardenjohn
Cc: jpoimboe, mbenes, jikos, pmladek, joe.lawrence, live-patching,
linux-kernel
On Thu, Aug 22, 2024 at 11:01:58AM +0800, Wardenjohn wrote:
> 1. Add a klp_ops pointer to struct klp_func and
> rewrite klp_find_ops() and the other code that
> looks up the klp_ops of a function.
>
> 2. Move the definition of struct klp_ops into
> include/linux/livepatch.h.
Why?
* Re: [PATCH v3 1/2] Introduce klp_ops into klp_func structure
2024-08-25 4:48 ` Christoph Hellwig
@ 2024-08-25 14:37 ` zhang warden
2024-08-25 20:17 ` Jiri Kosina
From: zhang warden @ 2024-08-25 14:37 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Josh Poimboeuf, Miroslav Benes, jikos, pmladek, joe.lawrence,
live-patching, linux-kernel
> On Aug 25, 2024, at 12:48, Christoph Hellwig <hch@infradead.org> wrote:
>
> On Thu, Aug 22, 2024 at 11:01:58AM +0800, Wardenjohn wrote:
>> 1. Add a klp_ops pointer to struct klp_func and
>> rewrite klp_find_ops() and the other code that
>> looks up the klp_ops of a function.
>>
>> 2. Move the definition of struct klp_ops into
>> include/linux/livepatch.h.
>
> Why?
>
Hi, Christoph.
When introducing the "using" feature, we need to look up a function's klp_ops.
Getting the klp_ops from the transition code would otherwise require more
complex checking logic.

If we store the klp_ops pointer in klp_func, we can remove the global klp_ops
list head. Moreover, several places in livepatch need a function's ops; with
this change they no longer need to search the list for the right ops.
It becomes simple and straightforward.
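In other words (just an illustration of the difference, not a snippet taken
from the patch):

	/* before: every lookup walked the global klp_ops list */
	ops = klp_find_ops(func->old_func);

	/* after: the ops pointer is cached in the klp_func itself */
	ops = func->ops;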
Regards,
Wardenjohn
* Re: [PATCH v3 1/2] Introduce klp_ops into klp_func structure
2024-08-25 14:37 ` zhang warden
@ 2024-08-25 20:17 ` Jiri Kosina
2024-08-26 3:31 ` zhang warden
From: Jiri Kosina @ 2024-08-25 20:17 UTC (permalink / raw)
To: zhang warden
Cc: Christoph Hellwig, Josh Poimboeuf, Miroslav Benes, pmladek,
joe.lawrence, live-patching, linux-kernel
On Sun, 25 Aug 2024, zhang warden wrote:
> >> 1. Add a klp_ops pointer to struct klp_func and
> >> rewrite klp_find_ops() and the other code that
> >> looks up the klp_ops of a function.
> >>
> >> 2. Move the definition of struct klp_ops into
> >> include/linux/livepatch.h.
> >
> > Why?
> >
>
> Hi, Christoph.
>
> When introducing the "using" feature, we need to look up a function's klp_ops.
> Getting the klp_ops from the transition code would otherwise require more
> complex checking logic.
>
> If we store the klp_ops pointer in klp_func, we can remove the global klp_ops
> list head. Moreover, several places in livepatch need a function's ops; with
> this change they no longer need to search the list for the right ops.
> It becomes simple and straightforward.
I believe that Christoph's "Why?" in fact meant "please include the
rationale for the changes being made (such as the above) in the patch
changelog" :)
Thanks,
--
Jiri Kosina
SUSE Labs
* Re: [PATCH v3 1/2] Introduce klp_ops into klp_func structure
2024-08-25 20:17 ` Jiri Kosina
@ 2024-08-26 3:31 ` zhang warden
From: zhang warden @ 2024-08-26 3:31 UTC (permalink / raw)
To: Jiri Kosina
Cc: Christoph Hellwig, Josh Poimboeuf, Miroslav Benes, Petr Mladek,
Joe Lawrence, live-patching, linux-kernel
> On Aug 26, 2024, at 04:17, Jiri Kosina <jikos@kernel.org> wrote:
>
> I believe that Christoph's "Why?" in fact meant "please include the
> rationale for the changes being made (such as the above) in the patch
> changelog" :)
>
> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs
>
Hi Kosina.
OK, the rationale for this patch will be added to the commit log in the next version.
Thanks,
Wardenjohn.