linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/2] support for stacked patches
@ 2015-01-20 15:26 Josh Poimboeuf
  2015-01-20 15:26 ` [PATCH v2 1/2] livepatch: enforce patch stacking semantics Josh Poimboeuf
  2015-01-20 15:26 ` [PATCH v2 2/2] livepatch: support for repatching a function Josh Poimboeuf
  0 siblings, 2 replies; 7+ messages in thread
From: Josh Poimboeuf @ 2015-01-20 15:26 UTC (permalink / raw)
  To: Seth Jennings, Jiri Kosina, Vojtech Pavlik; +Cc: live-patching, linux-kernel

Add support for properly stacking patches.  This includes enforcing patch stack
semantics and allowing functions to be repatched.

v2 changes:
- add "enforce stacking semantics" patch
- add documentation comments for new klp_ops struct

Josh Poimboeuf (2):
  livepatch: enforce patch stacking semantics
  livepatch: support for repatching a function

 include/linux/livepatch.h |   4 +-
 kernel/livepatch/core.c   | 180 +++++++++++++++++++++++++++++++++-------------
 2 files changed, 131 insertions(+), 53 deletions(-)

-- 
2.1.0



* [PATCH v2 1/2] livepatch: enforce patch stacking semantics
  2015-01-20 15:26 [PATCH v2 0/2] support for stacked patches Josh Poimboeuf
@ 2015-01-20 15:26 ` Josh Poimboeuf
  2015-01-20 15:33   ` Jiri Slaby
  2015-01-20 15:26 ` [PATCH v2 2/2] livepatch: support for repatching a function Josh Poimboeuf
  1 sibling, 1 reply; 7+ messages in thread
From: Josh Poimboeuf @ 2015-01-20 15:26 UTC (permalink / raw)
  To: Seth Jennings, Jiri Kosina, Vojtech Pavlik; +Cc: live-patching, linux-kernel

Only allow the topmost patch on the stack to be enabled or disabled, so
that patches can't be removed or added in an arbitrary order.

Suggested-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 kernel/livepatch/core.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 3d9c00b..2401e7f 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -379,6 +379,11 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	struct klp_object *obj;
 	int ret;
 
+	/* enforce stacking: only the last enabled patch can be disabled */
+	if (!list_is_last(&patch->list, &klp_patches) &&
+	    list_next_entry(patch, list)->state == KLP_ENABLED)
+		return -EBUSY;
+
 	pr_notice("disabling patch '%s'\n", patch->mod->name);
 
 	for (obj = patch->objs; obj->funcs; obj++) {
@@ -435,6 +440,11 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (WARN_ON(patch->state != KLP_DISABLED))
 		return -EINVAL;
 
+	/* enforce stacking: only the first disabled patch can be enabled */
+	if (patch->list.prev != &klp_patches &&
+	    list_prev_entry(patch, list)->state == KLP_DISABLED)
+		return -EBUSY;
+
 	pr_notice_once("tainting kernel with TAINT_LIVEPATCH\n");
 	add_taint(TAINT_LIVEPATCH, LOCKDEP_STILL_OK);
 
-- 
2.1.0



* [PATCH v2 2/2] livepatch: support for repatching a function
  2015-01-20 15:26 [PATCH v2 0/2] support for stacked patches Josh Poimboeuf
  2015-01-20 15:26 ` [PATCH v2 1/2] livepatch: enforce patch stacking semantics Josh Poimboeuf
@ 2015-01-20 15:26 ` Josh Poimboeuf
  2015-01-20 15:37   ` Jiri Slaby
  1 sibling, 1 reply; 7+ messages in thread
From: Josh Poimboeuf @ 2015-01-20 15:26 UTC (permalink / raw)
  To: Seth Jennings, Jiri Kosina, Vojtech Pavlik; +Cc: live-patching, linux-kernel

Add support for patching a function multiple times.  If multiple patches
affect a function, the function in the most recently enabled patch
"wins".  This enables a cumulative patch upgrade path, where each patch
is a superset of previous patches.

This requires restructuring the data a little bit.  With the current
design, where each klp_func struct has its own ftrace_ops, we'd have to
unregister the old ops and then register the new ops, because
FTRACE_OPS_FL_IPMODIFY prevents us from having two ops registered for
the same function at the same time.  That would leave a regression
window where the function isn't patched at all (not good for a patch
upgrade path).

This patch replaces the per-klp_func ftrace_ops with a global klp_ops
list, with one ftrace_ops per original function.  A single ftrace_ops is
shared between all klp_funcs which have the same old_addr.  This allows
the switch between function versions to happen instantaneously by
updating the klp_ops struct's func_stack list.  The winner is the
klp_func at the top of the func_stack (front of the list).

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 include/linux/livepatch.h |   4 +-
 kernel/livepatch/core.c   | 170 ++++++++++++++++++++++++++++++++--------------
 2 files changed, 121 insertions(+), 53 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 950bc61..f14c6fb 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -40,8 +40,8 @@ enum klp_state {
  * @old_addr:	a hint conveying at what address the old function
  *		can be found (optional, vmlinux patches only)
  * @kobj:	kobject for sysfs resources
- * @fops:	ftrace operations structure
  * @state:	tracks function-level patch application state
+ * @stack_node:	list node for klp_ops func_stack list
  */
 struct klp_func {
 	/* external */
@@ -59,8 +59,8 @@ struct klp_func {
 
 	/* internal */
 	struct kobject kobj;
-	struct ftrace_ops *fops;
 	enum klp_state state;
+	struct list_head stack_node;
 };
 
 /**
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 2401e7f..2c137e8 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -29,17 +29,53 @@
 #include <linux/kallsyms.h>
 #include <linux/livepatch.h>
 
-/*
- * The klp_mutex protects the klp_patches list and state transitions of any
- * structure reachable from the patches list.  References to any structure must
- * be obtained under mutex protection.
+/**
+ * struct klp_ops - structure for tracking registered ftrace ops structs
+ *
+ * A single ftrace_ops is shared between all enabled replacement functions
+ * (klp_func structs) which have the same old_addr.  This allows the switch
+ * between function versions to happen instantaneously by updating the klp_ops
+ * struct's func_stack list.  The winner is the klp_func at the top of the
+ * func_stack (front of the list).
+ *
+ * @node:	node for the global klp_ops list
+ * @func_stack:	list head for the stack of klp_func's (active func is on top)
+ * @fops:	registered ftrace ops struct
  */
+struct klp_ops {
+	struct list_head node;
+	struct list_head func_stack;
+	struct ftrace_ops fops;
+};
 
+/*
+ * The klp_mutex protects the global lists and state transitions of any
+ * structure reachable from them.  References to any structure must be obtained
+ * under mutex protection (except in klp_ftrace_handler(), which uses RCU to
+ * ensure it gets consistent data).
+ */
 static DEFINE_MUTEX(klp_mutex);
+
 static LIST_HEAD(klp_patches);
+static LIST_HEAD(klp_ops);
 
 static struct kobject *klp_root_kobj;
 
+static struct klp_ops *klp_find_ops(unsigned long old_addr)
+{
+	struct klp_ops *ops;
+	struct klp_func *func;
+
+	list_for_each_entry(ops, &klp_ops, node) {
+		func = list_first_entry(&ops->func_stack, struct klp_func,
+					stack_node);
+		if (func->old_addr == old_addr)
+			return ops;
+	}
+
+	return NULL;
+}
+
 static bool klp_is_module(struct klp_object *obj)
 {
 	return obj->name;
@@ -267,16 +303,28 @@ static int klp_write_object_relocations(struct module *pmod,
 
 static void notrace klp_ftrace_handler(unsigned long ip,
 				       unsigned long parent_ip,
-				       struct ftrace_ops *ops,
+				       struct ftrace_ops *fops,
 				       struct pt_regs *regs)
 {
-	struct klp_func *func = ops->private;
+	struct klp_ops *ops;
+	struct klp_func *func;
+
+	ops = container_of(fops, struct klp_ops, fops);
+
+	rcu_read_lock();
+	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
+				      stack_node);
+	rcu_read_unlock();
+
+	if (WARN_ON(!func))
+		return;
 
 	klp_arch_set_pc(regs, (unsigned long)func->new_func);
 }
 
 static int klp_disable_func(struct klp_func *func)
 {
+	struct klp_ops *ops;
 	int ret;
 
 	if (WARN_ON(func->state != KLP_ENABLED))
@@ -285,16 +333,28 @@ static int klp_disable_func(struct klp_func *func)
 	if (WARN_ON(!func->old_addr))
 		return -EINVAL;
 
-	ret = unregister_ftrace_function(func->fops);
-	if (ret) {
-		pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
-		       func->old_name, ret);
-		return ret;
-	}
+	ops = klp_find_ops(func->old_addr);
+	if (WARN_ON(!ops))
+		return -EINVAL;
 
-	ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
-	if (ret)
-		pr_warn("function unregister succeeded but failed to clear the filter\n");
+	if (list_is_singular(&ops->func_stack)) {
+		ret = unregister_ftrace_function(&ops->fops);
+		if (ret) {
+			pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
+			       func->old_name, ret);
+			return ret;
+		}
+
+		ret = ftrace_set_filter_ip(&ops->fops, func->old_addr, 1, 0);
+		if (ret)
+			pr_warn("function unregister succeeded but failed to clear the filter\n");
+
+		list_del_rcu(&func->stack_node);
+		list_del(&ops->node);
+		kfree(ops);
+	} else {
+		list_del_rcu(&func->stack_node);
+	}
 
 	func->state = KLP_DISABLED;
 
@@ -303,6 +363,7 @@ static int klp_disable_func(struct klp_func *func)
 
 static int klp_enable_func(struct klp_func *func)
 {
+	struct klp_ops *ops;
 	int ret;
 
 	if (WARN_ON(!func->old_addr))
@@ -311,22 +372,50 @@ static int klp_enable_func(struct klp_func *func)
 	if (WARN_ON(func->state != KLP_DISABLED))
 		return -EINVAL;
 
-	ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
-	if (ret) {
-		pr_err("failed to set ftrace filter for function '%s' (%d)\n",
-		       func->old_name, ret);
-		return ret;
-	}
+	ops = klp_find_ops(func->old_addr);
+	if (!ops) {
+		ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+		if (!ops)
+			return -ENOMEM;
+
+		ops->fops.func = klp_ftrace_handler;
+		ops->fops.flags = FTRACE_OPS_FL_SAVE_REGS |
+				  FTRACE_OPS_FL_DYNAMIC |
+				  FTRACE_OPS_FL_IPMODIFY;
+
+		list_add(&ops->node, &klp_ops);
+
+		INIT_LIST_HEAD(&ops->func_stack);
+		list_add_rcu(&func->stack_node, &ops->func_stack);
+
+		ret = ftrace_set_filter_ip(&ops->fops, func->old_addr, 0, 0);
+		if (ret) {
+			pr_err("failed to set ftrace filter for function '%s' (%d)\n",
+			       func->old_name, ret);
+			goto err;
+		}
+
+		ret = register_ftrace_function(&ops->fops);
+		if (ret) {
+			pr_err("failed to register ftrace handler for function '%s' (%d)\n",
+			       func->old_name, ret);
+			ftrace_set_filter_ip(&ops->fops, func->old_addr, 1, 0);
+			goto err;
+		}
+
 
-	ret = register_ftrace_function(func->fops);
-	if (ret) {
-		pr_err("failed to register ftrace handler for function '%s' (%d)\n",
-		       func->old_name, ret);
-		ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
 	} else {
-		func->state = KLP_ENABLED;
+		list_add_rcu(&func->stack_node, &ops->func_stack);
 	}
 
+	func->state = KLP_ENABLED;
+
+	return ret;
+
+err:
+	list_del_rcu(&func->stack_node);
+	list_del(&ops->node);
+	kfree(ops);
 	return ret;
 }
 
@@ -582,10 +671,6 @@ static struct kobj_type klp_ktype_patch = {
 
 static void klp_kobj_release_func(struct kobject *kobj)
 {
-	struct klp_func *func;
-
-	func = container_of(kobj, struct klp_func, kobj);
-	kfree(func->fops);
 }
 
 static struct kobj_type klp_ktype_func = {
@@ -642,28 +727,11 @@ static void klp_free_patch(struct klp_patch *patch)
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
-	struct ftrace_ops *ops;
-	int ret;
-
-	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
-	if (!ops)
-		return -ENOMEM;
-
-	ops->private = func;
-	ops->func = klp_ftrace_handler;
-	ops->flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_DYNAMIC |
-		     FTRACE_OPS_FL_IPMODIFY;
-	func->fops = ops;
+	INIT_LIST_HEAD(&func->stack_node);
 	func->state = KLP_DISABLED;
 
-	ret = kobject_init_and_add(&func->kobj, &klp_ktype_func,
-				   obj->kobj, func->old_name);
-	if (ret) {
-		kfree(func->fops);
-		return ret;
-	}
-
-	return 0;
+	return kobject_init_and_add(&func->kobj, &klp_ktype_func,
+				    obj->kobj, func->old_name);
 }
 
 /* parts of the initialization that is done only when the object is loaded */
-- 
2.1.0



* Re: [PATCH v2 1/2] livepatch: enforce patch stacking semantics
  2015-01-20 15:26 ` [PATCH v2 1/2] livepatch: enforce patch stacking semantics Josh Poimboeuf
@ 2015-01-20 15:33   ` Jiri Slaby
  0 siblings, 0 replies; 7+ messages in thread
From: Jiri Slaby @ 2015-01-20 15:33 UTC (permalink / raw)
  To: Josh Poimboeuf, Seth Jennings, Jiri Kosina, Vojtech Pavlik
  Cc: live-patching, linux-kernel

On 01/20/2015, 04:26 PM, Josh Poimboeuf wrote:
> Only allow the topmost patch on the stack to be enabled or disabled, so
> that patches can't be removed or added in an arbitrary order.
> 
> Suggested-by: Jiri Kosina <jkosina@suse.cz>
> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>

Reviewed-by: Jiri Slaby <jslaby@suse.cz>

> ---
>  kernel/livepatch/core.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 3d9c00b..2401e7f 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -379,6 +379,11 @@ static int __klp_disable_patch(struct klp_patch *patch)
>  	struct klp_object *obj;
>  	int ret;
>  
> +	/* enforce stacking: only the last enabled patch can be disabled */
> +	if (!list_is_last(&patch->list, &klp_patches) &&
> +	    list_next_entry(patch, list)->state == KLP_ENABLED)
> +		return -EBUSY;
> +
>  	pr_notice("disabling patch '%s'\n", patch->mod->name);
>  
>  	for (obj = patch->objs; obj->funcs; obj++) {
> @@ -435,6 +440,11 @@ static int __klp_enable_patch(struct klp_patch *patch)
>  	if (WARN_ON(patch->state != KLP_DISABLED))
>  		return -EINVAL;
>  
> +	/* enforce stacking: only the first disabled patch can be enabled */
> +	if (patch->list.prev != &klp_patches &&
> +	    list_prev_entry(patch, list)->state == KLP_DISABLED)
> +		return -EBUSY;
> +
>  	pr_notice_once("tainting kernel with TAINT_LIVEPATCH\n");
>  	add_taint(TAINT_LIVEPATCH, LOCKDEP_STILL_OK);
>  
> 

thanks,
-- 
js
suse labs


* Re: [PATCH v2 2/2] livepatch: support for repatching a function
  2015-01-20 15:26 ` [PATCH v2 2/2] livepatch: support for repatching a function Josh Poimboeuf
@ 2015-01-20 15:37   ` Jiri Slaby
  2015-01-20 15:47     ` Josh Poimboeuf
  0 siblings, 1 reply; 7+ messages in thread
From: Jiri Slaby @ 2015-01-20 15:37 UTC (permalink / raw)
  To: Josh Poimboeuf, Seth Jennings, Jiri Kosina, Vojtech Pavlik
  Cc: live-patching, linux-kernel

On 01/20/2015, 04:26 PM, Josh Poimboeuf wrote:
> Add support for patching a function multiple times.  If multiple patches
> affect a function, the function in the most recently enabled patch
> "wins".  This enables a cumulative patch upgrade path, where each patch
> is a superset of previous patches.
> 
> This requires restructuring the data a little bit.  With the current
> design, where each klp_func struct has its own ftrace_ops, we'd have to
> unregister the old ops and then register the new ops, because
> FTRACE_OPS_FL_IPMODIFY prevents us from having two ops registered for
> the same function at the same time.  That would leave a regression
> window where the function isn't patched at all (not good for a patch
> upgrade path).
> 
> This patch replaces the per-klp_func ftrace_ops with a global klp_ops
> list, with one ftrace_ops per original function.  A single ftrace_ops is
> shared between all klp_funcs which have the same old_addr.  This allows
> the switch between function versions to happen instantaneously by
> updating the klp_ops struct's func_stack list.  The winner is the
> klp_func at the top of the func_stack (front of the list).
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>

Reviewed-by: Jiri Slaby <jslaby@suse.cz>

But...

> @@ -267,16 +303,28 @@ static int klp_write_object_relocations(struct module *pmod,
>  
>  static void notrace klp_ftrace_handler(unsigned long ip,
>  				       unsigned long parent_ip,
> -				       struct ftrace_ops *ops,
> +				       struct ftrace_ops *fops,
>  				       struct pt_regs *regs)
>  {
> -	struct klp_func *func = ops->private;
> +	struct klp_ops *ops;
> +	struct klp_func *func;
> +
> +	ops = container_of(fops, struct klp_ops, fops);
> +
> +	rcu_read_lock();
> +	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
> +				      stack_node);
> +	rcu_read_unlock();
> +
> +	if (WARN_ON(!func))
> +		return;

If it ever happens, the WARN will drown the machine in output.

WARN_ON_RATELIMIT?

thanks,
-- 
js
suse labs


* Re: [PATCH v2 2/2] livepatch: support for repatching a function
  2015-01-20 15:37   ` Jiri Slaby
@ 2015-01-20 15:47     ` Josh Poimboeuf
  2015-01-20 19:11       ` Jiri Kosina
  0 siblings, 1 reply; 7+ messages in thread
From: Josh Poimboeuf @ 2015-01-20 15:47 UTC (permalink / raw)
  To: Jiri Slaby
  Cc: Seth Jennings, Jiri Kosina, Vojtech Pavlik, live-patching,
	linux-kernel

On Tue, Jan 20, 2015 at 04:37:59PM +0100, Jiri Slaby wrote:
> On 01/20/2015, 04:26 PM, Josh Poimboeuf wrote:
> > Add support for patching a function multiple times.  If multiple patches
> > affect a function, the function in the most recently enabled patch
> > "wins".  This enables a cumulative patch upgrade path, where each patch
> > is a superset of previous patches.
> > 
> > This requires restructuring the data a little bit.  With the current
> > design, where each klp_func struct has its own ftrace_ops, we'd have to
> > unregister the old ops and then register the new ops, because
> > FTRACE_OPS_FL_IPMODIFY prevents us from having two ops registered for
> > the same function at the same time.  That would leave a regression
> > window where the function isn't patched at all (not good for a patch
> > upgrade path).
> > 
> > This patch replaces the per-klp_func ftrace_ops with a global klp_ops
> > list, with one ftrace_ops per original function.  A single ftrace_ops is
> > shared between all klp_funcs which have the same old_addr.  This allows
> > the switch between function versions to happen instantaneously by
> > updating the klp_ops struct's func_stack list.  The winner is the
> > klp_func at the top of the func_stack (front of the list).
> > 
> > Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
> 
> Reviewed-by: Jiri Slaby <jslaby@suse.cz>

Thanks for the review!

> But...
> 
> > @@ -267,16 +303,28 @@ static int klp_write_object_relocations(struct module *pmod,
> >  
> >  static void notrace klp_ftrace_handler(unsigned long ip,
> >  				       unsigned long parent_ip,
> > -				       struct ftrace_ops *ops,
> > +				       struct ftrace_ops *fops,
> >  				       struct pt_regs *regs)
> >  {
> > -	struct klp_func *func = ops->private;
> > +	struct klp_ops *ops;
> > +	struct klp_func *func;
> > +
> > +	ops = container_of(fops, struct klp_ops, fops);
> > +
> > +	rcu_read_lock();
> > +	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
> > +				      stack_node);
> > +	rcu_read_unlock();
> > +
> > +	if (WARN_ON(!func))
> > +		return;
> 
> If it ever happens, the WARN will drown the machine in output.

Yeah, maybe, depending on the nature of the bug.

> WARN_ON_RATELIMIT?

Since this warning should never happen unless there's a code bug, I
think WARN_ON_ONCE should be sufficient?

-- 
Josh


* Re: [PATCH v2 2/2] livepatch: support for repatching a function
  2015-01-20 15:47     ` Josh Poimboeuf
@ 2015-01-20 19:11       ` Jiri Kosina
  0 siblings, 0 replies; 7+ messages in thread
From: Jiri Kosina @ 2015-01-20 19:11 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Jiri Slaby, Seth Jennings, Vojtech Pavlik, live-patching,
	linux-kernel

On Tue, 20 Jan 2015, Josh Poimboeuf wrote:

> Yeah, maybe, depending on the nature of the bug.
> 
> > WARN_ON_RATELIMIT?
> 
> Since this warning should never happen unless there's a code bug, I
> think WARN_ON_ONCE should be sufficient?

Yeah. The stacktrace is likely not to point to the source of the problem 
anyway, so spamming dmesg with it is rather pointless.

I've changed WARN_ON() in ftrace handler to WARN_ON_ONCE() and applied. 
Thanks,

-- 
Jiri Kosina
SUSE Labs

