* [patch 10/10] Text Edit Lock - x86_64 standardize debug rodata
2007-08-27 15:56 [patch 00/10] Text Edit Lock Mathieu Desnoyers
@ 2007-08-27 15:56 ` Mathieu Desnoyers
0 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-08-27 15:56 UTC (permalink / raw)
To: akpm, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen, pageexec
[-- Attachment #1: text-edit-lock-x86_64-standardize-debug-rodata.patch --]
[-- Type: text/plain, Size: 1807 bytes --]
Standardize DEBUG_RODATA, removing special cases for hotplug and kprobes.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: pageexec@freemail.hu
---
arch/x86_64/mm/init.c | 23 +++++------------------
include/asm-x86_64/cacheflush.h | 22 ++++++++++++++++++++++
2 files changed, 27 insertions(+), 18 deletions(-)
Index: linux-2.6-lttng/arch/x86_64/mm/init.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86_64/mm/init.c 2007-08-18 10:02:58.000000000 -0400
+++ linux-2.6-lttng/arch/x86_64/mm/init.c 2007-08-18 10:04:14.000000000 -0400
@@ -598,25 +598,11 @@ void free_initmem(void)
void mark_rodata_ro(void)
{
- unsigned long start = (unsigned long)_stext, end;
+ unsigned long start = PFN_ALIGN(_stext);
+ unsigned long end = PFN_ALIGN(__end_rodata);
-#ifdef CONFIG_HOTPLUG_CPU
- /* It must still be possible to apply SMP alternatives. */
- if (num_possible_cpus() > 1)
- start = (unsigned long)_etext;
-#endif
-
-#ifdef CONFIG_KPROBES
- start = (unsigned long)__start_rodata;
-#endif
-
- end = (unsigned long)__end_rodata;
- start = (start + PAGE_SIZE - 1) & PAGE_MASK;
- end &= PAGE_MASK;
- if (end <= start)
- return;
-
- change_page_attr_addr(start, (end - start) >> PAGE_SHIFT, PAGE_KERNEL_RO);
+ change_page_attr_addr(start, (end - start) >> PAGE_SHIFT,
+ PAGE_KERNEL_RO);
printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
(end - start) >> 10);
@@ -629,6 +615,7 @@ void mark_rodata_ro(void)
*/
global_flush_tlb();
}
+
#endif
#ifdef CONFIG_BLK_DEV_INITRD
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1
@ 2007-09-06 20:01 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 01/10] Kprobes - use a mutex to protect the instruction pages list Mathieu Desnoyers
` (9 more replies)
0 siblings, 10 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Hi Andrew,
Here is the updated text edit lock for 2.6.23-rc4-mm1. It should be clean and
ready to apply.
Mathieu
* [patch 01/10] Kprobes - use a mutex to protect the instruction pages list.
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 02/10] Kprobes - do not use kprobes mutex in arch code Mathieu Desnoyers
` (8 subsequent siblings)
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, hch, prasanna,
anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-use-mutex-for-insn-pages.patch --]
[-- Type: text/plain, Size: 3625 bytes --]
Protect the instruction pages list with a dedicated insn pages mutex, taken in
get_insn_slot() and free_insn_slot(). This ensures that architectures that do
not need to call arch_remove_kprobe() do not take an unneeded kprobes mutex.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
CC: hch@infradead.org
CC: prasanna@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
kernel/kprobes.c | 27 +++++++++++++++++++++------
1 file changed, 21 insertions(+), 6 deletions(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-08-27 11:48:56.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-08-27 11:48:58.000000000 -0400
@@ -95,6 +95,10 @@ enum kprobe_slot_state {
SLOT_USED = 2,
};
+/*
+ * Protects the kprobe_insn_pages list. Can nest into kprobe_mutex.
+ */
+static DEFINE_MUTEX(kprobe_insn_mutex);
static struct hlist_head kprobe_insn_pages;
static int kprobe_garbage_slots;
static int collect_garbage_slots(void);
@@ -131,7 +135,9 @@ kprobe_opcode_t __kprobes *get_insn_slot
{
struct kprobe_insn_page *kip;
struct hlist_node *pos;
+ kprobe_opcode_t *ret;
+ mutex_lock(&kprobe_insn_mutex);
retry:
hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
if (kip->nused < INSNS_PER_PAGE) {
@@ -140,7 +146,8 @@ kprobe_opcode_t __kprobes *get_insn_slot
if (kip->slot_used[i] == SLOT_CLEAN) {
kip->slot_used[i] = SLOT_USED;
kip->nused++;
- return kip->insns + (i * MAX_INSN_SIZE);
+ ret = kip->insns + (i * MAX_INSN_SIZE);
+ goto end;
}
}
/* Surprise! No unused slots. Fix kip->nused. */
@@ -154,8 +161,10 @@ kprobe_opcode_t __kprobes *get_insn_slot
}
/* All out of space. Need to allocate a new page. Use slot 0. */
kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
- if (!kip)
- return NULL;
+ if (!kip) {
+ ret = NULL;
+ goto end;
+ }
/*
* Use module_alloc so this page is within +/- 2GB of where the
@@ -165,7 +174,8 @@ kprobe_opcode_t __kprobes *get_insn_slot
kip->insns = module_alloc(PAGE_SIZE);
if (!kip->insns) {
kfree(kip);
- return NULL;
+ ret = NULL;
+ goto end;
}
INIT_HLIST_NODE(&kip->hlist);
hlist_add_head(&kip->hlist, &kprobe_insn_pages);
@@ -173,7 +183,10 @@ kprobe_opcode_t __kprobes *get_insn_slot
kip->slot_used[0] = SLOT_USED;
kip->nused = 1;
kip->ngarbage = 0;
- return kip->insns;
+ ret = kip->insns;
+end:
+ mutex_unlock(&kprobe_insn_mutex);
+ return ret;
}
/* Return 1 if all garbages are collected, otherwise 0. */
@@ -207,7 +220,7 @@ static int __kprobes collect_garbage_slo
struct kprobe_insn_page *kip;
struct hlist_node *pos, *next;
- /* Ensure no-one is preepmted on the garbages */
+ /* Ensure no-one is preempted on the garbages */
if (check_safety() != 0)
return -EAGAIN;
@@ -231,6 +244,7 @@ void __kprobes free_insn_slot(kprobe_opc
struct kprobe_insn_page *kip;
struct hlist_node *pos;
+ mutex_lock(&kprobe_insn_mutex);
hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
if (kip->insns <= slot &&
slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
@@ -247,6 +261,7 @@ void __kprobes free_insn_slot(kprobe_opc
if (dirty && ++kprobe_garbage_slots > INSNS_PER_PAGE)
collect_garbage_slots();
+ mutex_unlock(&kprobe_insn_mutex);
}
#endif
* [patch 02/10] Kprobes - do not use kprobes mutex in arch code
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 01/10] Kprobes - use a mutex to protect the instruction pages list Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 03/10] Kprobes - declare kprobe_mutex static Mathieu Desnoyers
` (7 subsequent siblings)
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, prasanna,
anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-dont-use-kprobes-mutex-in-arch-code.patch --]
[-- Type: text/plain, Size: 5136 bytes --]
Remove the kprobes mutex from kprobes.h, since it does not belong there. Also
remove all uses of this mutex in the architecture-specific code, replacing them
with a proper mutex lock/unlock in the architecture-agnostic code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
CC: prasanna@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
arch/i386/kernel/kprobes.c | 2 --
arch/ia64/kernel/kprobes.c | 2 --
arch/powerpc/kernel/kprobes.c | 2 --
arch/s390/kernel/kprobes.c | 2 --
arch/x86_64/kernel/kprobes.c | 2 --
include/linux/kprobes.h | 2 --
kernel/kprobes.c | 2 ++
7 files changed, 2 insertions(+), 12 deletions(-)
Index: linux-2.6-lttng/include/linux/kprobes.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/kprobes.h 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/include/linux/kprobes.h 2007-09-04 12:10:34.000000000 -0400
@@ -35,7 +35,6 @@
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>
-#include <linux/mutex.h>
#ifdef CONFIG_KPROBES
#include <asm/kprobes.h>
@@ -183,7 +182,6 @@ static inline void kretprobe_assert(stru
}
extern spinlock_t kretprobe_lock;
-extern struct mutex kprobe_mutex;
extern int arch_prepare_kprobe(struct kprobe *p);
extern void arch_arm_kprobe(struct kprobe *p);
extern void arch_disarm_kprobe(struct kprobe *p);
Index: linux-2.6-lttng/arch/i386/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/i386/kernel/kprobes.c 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/arch/i386/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -187,9 +187,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-09-04 12:10:33.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -644,7 +644,9 @@ valid_p:
list_del_rcu(&p->list);
kfree(old_p);
}
+ mutex_lock(&kprobe_mutex);
arch_remove_kprobe(p);
+ mutex_unlock(&kprobe_mutex);
} else {
mutex_lock(&kprobe_mutex);
if (p->break_handler)
Index: linux-2.6-lttng/arch/ia64/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/ia64/kernel/kprobes.c 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/arch/ia64/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -567,9 +567,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
/*
* We are resuming execution after a single step fault, so the pt_regs
Index: linux-2.6-lttng/arch/powerpc/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/powerpc/kernel/kprobes.c 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/arch/powerpc/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -88,9 +88,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
Index: linux-2.6-lttng/arch/s390/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/s390/kernel/kprobes.c 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/arch/s390/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -220,9 +220,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
Index: linux-2.6-lttng/arch/x86_64/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86_64/kernel/kprobes.c 2007-09-04 12:10:31.000000000 -0400
+++ linux-2.6-lttng/arch/x86_64/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
@@ -226,9 +226,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
* [patch 03/10] Kprobes - declare kprobe_mutex static
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 01/10] Kprobes - use a mutex to protect the instruction pages list Mathieu Desnoyers
2007-09-06 20:01 ` [patch 02/10] Kprobes - do not use kprobes mutex in arch code Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 04/10] Text Edit Lock - Architecture Independent Code Mathieu Desnoyers
` (6 subsequent siblings)
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, hch, prasanna,
anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-declare-kprobes-mutex-static.patch --]
[-- Type: text/plain, Size: 1229 bytes --]
Since kprobe_mutex will not be used by other kernel objects, it makes sense to
declare it static.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
CC: hch@infradead.org
CC: prasanna@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
kernel/kprobes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-08-19 09:09:15.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-08-19 17:18:07.000000000 -0400
@@ -68,7 +68,7 @@ static struct hlist_head kretprobe_inst_
/* NOTE: change this value only with kprobe_mutex held */
static bool kprobe_enabled;
-DEFINE_MUTEX(kprobe_mutex); /* Protects kprobe_table */
+static DEFINE_MUTEX(kprobe_mutex); /* Protects kprobe_table */
DEFINE_SPINLOCK(kretprobe_lock); /* Protects kretprobe_inst_table */
static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
* [patch 04/10] Text Edit Lock - Architecture Independent Code
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (2 preceding siblings ...)
2007-09-06 20:01 ` [patch 03/10] Kprobes - declare kprobe_mutex static Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64 Mathieu Desnoyers
` (5 subsequent siblings)
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen
[-- Attachment #1: text-edit-lock-architecture-independent-code.patch --]
[-- Type: text/plain, Size: 3161 bytes --]
This provides architecture-independent synchronization around kernel text
modifications, through use of a global mutex.
A mutex has been chosen so that kprobes, the main user of this, can sleep during
memory allocation between the memory read of the instructions it must replace
and the memory write of the breakpoint.
Another user of this interface: immediate values.
Paravirt and alternatives are always done when SMP is inactive, so there is no
need to use locks.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
---
include/linux/memory.h | 7 +++++++
mm/memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
Index: linux-2.6-lttng/include/linux/memory.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/memory.h 2007-09-04 11:09:26.000000000 -0400
+++ linux-2.6-lttng/include/linux/memory.h 2007-09-04 12:10:45.000000000 -0400
@@ -86,4 +86,11 @@ extern int remove_memory_block(unsigned
register_memory_notifier(&fn##_mem_nb); \
}
+/*
+ * Take and release the kernel text modification lock, used for code patching.
+ * Users of this lock can sleep.
+ */
+extern void kernel_text_lock(void);
+extern void kernel_text_unlock(void);
+
#endif /* _LINUX_MEMORY_H_ */
Index: linux-2.6-lttng/mm/memory.c
===================================================================
--- linux-2.6-lttng.orig/mm/memory.c 2007-09-04 11:53:59.000000000 -0400
+++ linux-2.6-lttng/mm/memory.c 2007-09-04 12:11:14.000000000 -0400
@@ -51,6 +51,8 @@
#include <linux/init.h>
#include <linux/writeback.h>
#include <linux/memcontrol.h>
+#include <linux/kprobes.h>
+#include <linux/mutex.h>
#include <asm/pgalloc.h>
#include <asm/uaccess.h>
@@ -85,6 +87,12 @@ EXPORT_SYMBOL(high_memory);
int randomize_va_space __read_mostly = 1;
+/*
+ * Mutex protecting text section modification (dynamic code patching).
+ * Some users need to sleep (allocating memory...) while they hold this lock.
+ */
+static DEFINE_MUTEX(text_mutex);
+
static int __init disable_randmaps(char *s)
{
randomize_va_space = 0;
@@ -2787,3 +2795,29 @@ int access_process_vm(struct task_struct
return buf - old_buf;
}
EXPORT_SYMBOL_GPL(access_process_vm);
+
+/**
+ * kernel_text_lock - Take the kernel text modification lock
+ *
+ * Ensures mutual write exclusion of live kernel and module text
+ * modification. Should be used for code patching.
+ * Users of this lock can sleep.
+ */
+void __kprobes kernel_text_lock(void)
+{
+ mutex_lock(&text_mutex);
+}
+EXPORT_SYMBOL_GPL(kernel_text_lock);
+
+/**
+ * kernel_text_unlock - Release the kernel text modification lock
+ *
+ * Ensures mutual write exclusion of live kernel and module text
+ * modification. Should be used for code patching.
+ * Users of this lock can sleep.
+ */
+void __kprobes kernel_text_unlock(void)
+{
+ mutex_unlock(&text_mutex);
+}
+EXPORT_SYMBOL_GPL(kernel_text_unlock);
* [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (3 preceding siblings ...)
2007-09-06 20:01 ` [patch 04/10] Text Edit Lock - Architecture Independent Code Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-07 6:59 ` Andi Kleen
2007-09-07 8:43 ` Ananth N Mavinakayanahalli
2007-09-06 20:01 ` [patch 06/10] Text Edit Lock - kprobes architecture independent support Mathieu Desnoyers
` (4 subsequent siblings)
9 siblings, 2 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen, pageexec
[-- Attachment #1: text-edit-lock-alternative-i386-and-x86_64.patch --]
[-- Type: text/plain, Size: 10012 bytes --]
Fix a memcpy that should be a text_poke (in apply_alternatives).
Use kernel_wp_save/kernel_wp_restore in text_poke to support DEBUG_RODATA
correctly, so that the CPU HOTPLUG special case can be removed.
clflush all the cachelines touched by text_poke.
Add text_set(), which is basically a memset-like text_poke.
Add text_poke_early and text_set_early, for alternatives and paravirt boot-time
and module load time patching.
Changelog:
Fix the text_set and text_poke alignment check (bitwise AND and OR were mixed up).
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: pageexec@freemail.hu
---
arch/i386/kernel/alternative.c | 117 ++++++++++++++++++++++++++++++++-------
include/asm-i386/alternative.h | 40 +++++++++++++
include/asm-x86_64/alternative.h | 38 ++++++++++++
3 files changed, 172 insertions(+), 23 deletions(-)
Index: linux-2.6-lttng/arch/i386/kernel/alternative.c
===================================================================
--- linux-2.6-lttng.orig/arch/i386/kernel/alternative.c 2007-09-06 14:32:11.000000000 -0400
+++ linux-2.6-lttng/arch/i386/kernel/alternative.c 2007-09-06 14:59:19.000000000 -0400
@@ -16,6 +16,99 @@
#ifdef CONFIG_HOTPLUG_CPU
static int smp_alt_once;
+/*
+ * Warning:
+ * When you use this code to patch more than one byte of an instruction
+ * you need to make sure that other CPUs cannot execute this code in parallel.
+ * Also no thread must be currently preempted in the middle of these
+ * instructions. And on the local CPU you need to be protected against NMI or MCE
+ * handlers seeing an inconsistent instruction while you patch.
+ * Warning: read_cr0 is modified by paravirt, this is why we have _early
+ * versions. They are not in the __init section because they can be used at
+ * module load time.
+ */
+static inline void text_sync(void *addr, size_t len)
+{
+ void *faddr;
+
+ sync_core();
+ /* Not strictly needed, but can speed CPU recovery up. */
+ if (cpu_has_clflush)
+ for (faddr = addr; faddr < addr + len;
+ faddr += boot_cpu_data.x86_clflush_size)
+ asm("clflush (%0) " :: "r" (faddr) : "memory");
+}
+
+void * text_poke_early(void *addr, const void *opcode,
+ size_t len)
+{
+ memcpy(addr, opcode, len);
+ text_sync(addr, len);
+ return addr;
+}
+
+void * text_set_early(void *addr, int c, size_t len)
+{
+ memset(addr, c, len);
+ text_sync(addr, len);
+ return addr;
+}
+
+/*
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write.
+ */
+void * __kprobes text_poke(void *addr, const void *opcode, size_t len)
+{
+ unsigned long cr0;
+ int unaligned;
+
+ if (len > sizeof(long)) {
+ printk(KERN_ERR "text_poke of len %zu too big (max %lu)\n",
+ len, sizeof(long));
+ BUG_ON(1);
+ }
+ unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
+ - ((long)addr & ~(sizeof(long) - 1));
+ if (unlikely(unaligned)) {
+ printk(KERN_ERR "text_poke at addr %p of len %zu is "
+ "unaligned (%d)\n",
+ addr, len, unaligned);
+ BUG_ON(1);
+ }
+ kernel_wp_save(cr0);
+ memcpy(addr, opcode, len);
+ kernel_wp_restore(cr0);
+ text_sync(addr, len);
+ return addr;
+}
+
+void * __kprobes text_set(void *addr, int c, size_t len)
+{
+ unsigned long cr0;
+ int unaligned;
+
+ if (len > sizeof(long)) {
+ printk(KERN_ERR "text_set of len %zu too big (max %lu)\n",
+ len, sizeof(long));
+ BUG_ON(1);
+ }
+ unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
+ - ((long)addr & ~(sizeof(long) - 1));
+ if (unlikely(unaligned)) {
+ printk(KERN_ERR "text_set at addr %p of len %zu is "
+ "unaligned (%d)\n",
+ addr, len, unaligned);
+ BUG_ON(1);
+ }
+ kernel_wp_save(cr0);
+ memset(addr, c, len);
+ kernel_wp_restore(cr0);
+ text_sync(addr, len);
+ return addr;
+}
+
static int __init bootonly(char *str)
{
smp_alt_once = 1;
@@ -197,7 +290,7 @@ void apply_alternatives(struct alt_instr
memcpy(insnbuf, a->replacement, a->replacementlen);
add_nops(insnbuf + a->replacementlen,
a->instrlen - a->replacementlen);
- text_poke(instr, insnbuf, a->instrlen);
+ text_poke_early(instr, insnbuf, a->instrlen);
}
}
@@ -212,7 +305,7 @@ static void alternatives_smp_lock(u8 **s
continue;
if (*ptr > text_end)
continue;
- text_poke(*ptr, ((unsigned char []){0xf0}), 1); /* add lock prefix */
+ text_set(*ptr, 0xf0, 1); /* add lock prefix */
};
}
@@ -372,7 +465,7 @@ void apply_paravirt(struct paravirt_patc
/* Pad the rest with nops */
add_nops(insnbuf + used, p->len - used);
- text_poke(p->instr, insnbuf, p->len);
+ text_poke_early(p->instr, insnbuf, p->len);
}
}
extern struct paravirt_patch_site __start_parainstructions[],
@@ -429,21 +522,3 @@ void __init alternative_instructions(voi
restart_mce();
#endif
}
-
-/*
- * Warning:
- * When you use this code to patch more than one byte of an instruction
- * you need to make sure that other CPUs cannot execute this code in parallel.
- * Also no thread must be currently preempted in the middle of these instructions.
- * And on the local CPU you need to be protected again NMI or MCE handlers
- * seeing an inconsistent instruction while you patch.
- */
-void __kprobes text_poke(void *addr, unsigned char *opcode, int len)
-{
- memcpy(addr, opcode, len);
- sync_core();
- /* Not strictly needed, but can speed CPU recovery up. Ignore cross cacheline
- case. */
- if (cpu_has_clflush)
- asm("clflush (%0) " :: "r" (addr) : "memory");
-}
Index: linux-2.6-lttng/include/asm-i386/alternative.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-i386/alternative.h 2007-09-06 14:32:11.000000000 -0400
+++ linux-2.6-lttng/include/asm-i386/alternative.h 2007-09-06 14:58:12.000000000 -0400
@@ -4,6 +4,7 @@
#include <asm/types.h>
#include <linux/stddef.h>
#include <linux/types.h>
+#include <asm/processor-flags.h>
struct alt_instr {
u8 *instr; /* original instruction */
@@ -149,6 +150,43 @@ apply_paravirt(struct paravirt_patch_sit
#define __parainstructions_end NULL
#endif
-extern void text_poke(void *addr, unsigned char *opcode, int len);
+/*
+ * Clear and restore the kernel write-protection flag on the local CPU.
+ * Allows the kernel to edit read-only pages.
+ * Side-effect: any interrupt handler running between save and restore will have
+ * the ability to write to read-only pages.
+ *
+ * Warning:
+ * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
+ * no thread can be preempted in the instructions being modified (no iret to an
+ * invalid instruction possible) or if the instructions are changed from a
+ * consistent state to another consistent state atomically.
+ * More care must be taken when modifying code in the SMP case because of
+ * Intel's errata.
+ * On the local CPU you need to be protected against NMI or MCE handlers seeing an
+ * inconsistent instruction while you patch.
+ */
+
+extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern void *text_set(void *addr, int c, size_t len);
+extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void *text_set_early(void *addr, int c, size_t len);
+
+#define kernel_wp_save(cr0) \
+ do { \
+ typecheck(unsigned long, cr0); \
+ preempt_disable(); \
+ cr0 = read_cr0(); \
+ if (cpu_data[smp_processor_id()].wp_works_ok) \
+ write_cr0(cr0 & ~X86_CR0_WP); \
+ } while (0)
+
+#define kernel_wp_restore(cr0) \
+ do { \
+ typecheck(unsigned long, cr0); \
+ if (cpu_data[smp_processor_id()].wp_works_ok) \
+ write_cr0(cr0); \
+ preempt_enable(); \
+ } while (0)
#endif /* _I386_ALTERNATIVE_H */
Index: linux-2.6-lttng/include/asm-x86_64/alternative.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86_64/alternative.h 2007-09-06 14:32:11.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86_64/alternative.h 2007-09-06 14:58:12.000000000 -0400
@@ -5,6 +5,7 @@
#include <linux/types.h>
#include <linux/stddef.h>
+#include <asm/processor-flags.h>
/*
* Alternative inline assembly for SMP.
@@ -154,6 +155,41 @@ apply_paravirt(struct paravirt_patch *st
#define __parainstructions_end NULL
#endif
-extern void text_poke(void *addr, unsigned char *opcode, int len);
+/*
+ * Clear and restore the kernel write-protection flag on the local CPU.
+ * Allows the kernel to edit read-only pages.
+ * Side-effect: any interrupt handler running between save and restore will have
+ * the ability to write to read-only pages.
+ *
+ * Warning:
+ * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
+ * no thread can be preempted in the instructions being modified (no iret to an
+ * invalid instruction possible) or if the instructions are changed from a
+ * consistent state to another consistent state atomically.
+ * More care must be taken when modifying code in the SMP case because of
+ * Intel's errata.
+ * On the local CPU you need to be protected against NMI or MCE handlers seeing an
+ * inconsistent instruction while you patch.
+ */
+
+extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern void *text_set(void *addr, int c, size_t len);
+extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void *text_set_early(void *addr, int c, size_t len);
+
+#define kernel_wp_save(cr0) \
+ do { \
+ typecheck(unsigned long, cr0); \
+ preempt_disable(); \
+ cr0 = read_cr0(); \
+ write_cr0(cr0 & ~X86_CR0_WP); \
+ } while (0)
+
+#define kernel_wp_restore(cr0) \
+ do { \
+ typecheck(unsigned long, cr0); \
+ write_cr0(cr0); \
+ preempt_enable(); \
+ } while (0)
#endif /* _X86_64_ALTERNATIVE_H */
* [patch 06/10] Text Edit Lock - kprobes architecture independent support
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (4 preceding siblings ...)
2007-09-06 20:01 ` [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64 Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-07 10:28 ` Ananth N Mavinakayanahalli
2007-09-06 20:01 ` [patch 07/10] Text Edit Lock - kprobes i386 Mathieu Desnoyers
` (3 subsequent siblings)
9 siblings, 1 reply; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, prasanna, ananth, anil.s.keshavamurthy, davem
[-- Attachment #1: text-edit-lock-kprobes-architecture-independent.patch --]
[-- Type: text/plain, Size: 3099 bytes --]
Use the mutual exclusion provided by the text edit lock in the kprobes code.
This allows other subsystems to manipulate the kernel code coherently.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: prasanna@in.ibm.com
CC: ananth@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
kernel/kprobes.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-09-04 12:10:37.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-09-04 12:11:22.000000000 -0400
@@ -43,6 +43,7 @@
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/kdebug.h>
+#include <linux/memory.h>
#include <asm-generic/sections.h>
#include <asm/cacheflush.h>
@@ -568,9 +569,10 @@ static int __kprobes __register_kprobe(s
goto out;
}
+ kernel_text_lock();
ret = arch_prepare_kprobe(p);
if (ret)
- goto out;
+ goto out_unlock_text;
INIT_HLIST_NODE(&p->hlist);
hlist_add_head_rcu(&p->hlist,
@@ -578,7 +580,8 @@ static int __kprobes __register_kprobe(s
if (kprobe_enabled)
arch_arm_kprobe(p);
-
+out_unlock_text:
+ kernel_text_unlock();
out:
mutex_unlock(&kprobe_mutex);
@@ -621,8 +624,11 @@ valid_p:
* enabled - otherwise, the breakpoint would already have
* been removed. We save on flushing icache.
*/
- if (kprobe_enabled)
+ if (kprobe_enabled) {
+ kernel_text_lock();
arch_disarm_kprobe(p);
+ kernel_text_unlock();
+ }
hlist_del_rcu(&old_p->hlist);
cleanup_p = 1;
} else {
@@ -644,9 +650,7 @@ valid_p:
list_del_rcu(&p->list);
kfree(old_p);
}
- mutex_lock(&kprobe_mutex);
arch_remove_kprobe(p);
- mutex_unlock(&kprobe_mutex);
} else {
mutex_lock(&kprobe_mutex);
if (p->break_handler)
@@ -716,8 +720,9 @@ static int __kprobes pre_handler_kretpro
struct kretprobe_instance, uflist);
ri->rp = rp;
ri->task = current;
+ kernel_text_lock();
arch_prepare_kretprobe(ri, regs);
-
+ kernel_text_unlock();
/* XXX(hch): why is there no hlist_move_head? */
hlist_del(&ri->uflist);
hlist_add_head(&ri->uflist, &ri->rp->used_instances);
@@ -940,8 +945,10 @@ static void __kprobes enable_all_kprobes
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
+ kernel_text_lock();
hlist_for_each_entry_rcu(p, node, head, hlist)
arch_arm_kprobe(p);
+ kernel_text_unlock();
}
kprobe_enabled = true;
@@ -969,10 +976,12 @@ static void __kprobes disable_all_kprobe
printk(KERN_INFO "Kprobes globally disabled\n");
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
+ kernel_text_lock();
hlist_for_each_entry_rcu(p, node, head, hlist) {
if (!arch_trampoline_kprobe(p))
arch_disarm_kprobe(p);
}
+ kernel_text_unlock();
}
mutex_unlock(&kprobe_mutex);
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 20+ messages in thread
* [patch 07/10] Text Edit Lock - kprobes i386
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (5 preceding siblings ...)
2007-09-06 20:01 ` [patch 06/10] Text Edit Lock - kprobes architecture independent support Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 08/10] Text Edit Lock - kprobes x86_64 Mathieu Desnoyers
` (2 subsequent siblings)
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, Andi Kleen, prasanna, ananth,
anil.s.keshavamurthy, davem
[-- Attachment #1: text-edit-lock-kprobes-i386.patch --]
[-- Type: text/plain, Size: 1233 bytes --]
Make kprobes use text_set instead of text_poke.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: prasanna@in.ibm.com
CC: ananth@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
arch/i386/kernel/kprobes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6-lttng/arch/i386/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/i386/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
+++ linux-2.6-lttng/arch/i386/kernel/kprobes.c 2007-09-04 12:11:25.000000000 -0400
@@ -177,12 +177,12 @@ int __kprobes arch_prepare_kprobe(struct
void __kprobes arch_arm_kprobe(struct kprobe *p)
{
- text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
+ text_set(p->addr, BREAKPOINT_INSTRUCTION, 1);
}
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
- text_poke(p->addr, &p->opcode, 1);
+ text_set(p->addr, p->opcode, 1);
}
void __kprobes arch_remove_kprobe(struct kprobe *p)
* [patch 08/10] Text Edit Lock - kprobes x86_64
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (6 preceding siblings ...)
2007-09-06 20:01 ` [patch 07/10] Text Edit Lock - kprobes i386 Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 09/10] Text Edit Lock - i386 standardize debug rodata Mathieu Desnoyers
2007-09-06 20:01 ` [patch 10/10] Text Edit Lock - x86_64 " Mathieu Desnoyers
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel
Cc: Mathieu Desnoyers, Andi Kleen, prasanna, ananth,
anil.s.keshavamurthy, davem
[-- Attachment #1: text-edit-lock-kprobes-x86_64.patch --]
[-- Type: text/plain, Size: 1241 bytes --]
Make kprobes use text_set instead of text_poke.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: prasanna@in.ibm.com
CC: ananth@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
arch/x86_64/kernel/kprobes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6-lttng/arch/x86_64/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86_64/kernel/kprobes.c 2007-09-04 12:10:34.000000000 -0400
+++ linux-2.6-lttng/arch/x86_64/kernel/kprobes.c 2007-09-04 12:11:28.000000000 -0400
@@ -216,12 +216,12 @@ static void __kprobes arch_copy_kprobe(s
void __kprobes arch_arm_kprobe(struct kprobe *p)
{
- text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
+ text_set(p->addr, BREAKPOINT_INSTRUCTION, 1);
}
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
- text_poke(p->addr, &p->opcode, 1);
+ text_set(p->addr, p->opcode, 1);
}
void __kprobes arch_remove_kprobe(struct kprobe *p)
* [patch 09/10] Text Edit Lock - i386 standardize debug rodata
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (7 preceding siblings ...)
2007-09-06 20:01 ` [patch 08/10] Text Edit Lock - kprobes x86_64 Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 10/10] Text Edit Lock - x86_64 " Mathieu Desnoyers
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen, pageexec
[-- Attachment #1: text-edit-lock-i386-standardize-debug-rodata.patch --]
[-- Type: text/plain, Size: 1722 bytes --]
Standardize DEBUG_RODATA, removing special cases for hotplug and kprobes.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: pageexec@freemail.hu
---
arch/i386/mm/init.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
Index: linux-2.6-lttng/arch/i386/mm/init.c
===================================================================
--- linux-2.6-lttng.orig/arch/i386/mm/init.c 2007-08-19 22:28:00.000000000 -0400
+++ linux-2.6-lttng/arch/i386/mm/init.c 2007-08-19 22:34:39.000000000 -0400
@@ -794,23 +794,15 @@ static int noinline do_test_wp_bit(void)
}
#ifdef CONFIG_DEBUG_RODATA
-
void mark_rodata_ro(void)
{
unsigned long start = PFN_ALIGN(_text);
unsigned long size = PFN_ALIGN(_etext) - start;
-#ifndef CONFIG_KPROBES
-#ifdef CONFIG_HOTPLUG_CPU
- /* It must still be possible to apply SMP alternatives. */
- if (num_possible_cpus() <= 1)
-#endif
- {
- change_page_attr(virt_to_page(start),
- size >> PAGE_SHIFT, PAGE_KERNEL_RX);
- printk("Write protecting the kernel text: %luk\n", size >> 10);
- }
-#endif
+ change_page_attr(virt_to_page(start),
+ size >> PAGE_SHIFT, PAGE_KERNEL_RX);
+ printk("Write protecting the kernel text: %luk\n", size >> 10);
+
start += size;
size = (unsigned long)__end_rodata - start;
change_page_attr(virt_to_page(start),
@@ -826,6 +818,7 @@ void mark_rodata_ro(void)
*/
global_flush_tlb();
}
+
#endif
void free_init_pages(char *what, unsigned long begin, unsigned long end)
* [patch 10/10] Text Edit Lock - x86_64 standardize debug rodata
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
` (8 preceding siblings ...)
2007-09-06 20:01 ` [patch 09/10] Text Edit Lock - i386 standardize debug rodata Mathieu Desnoyers
@ 2007-09-06 20:01 ` Mathieu Desnoyers
9 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-06 20:01 UTC (permalink / raw)
To: akpm, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen, pageexec
[-- Attachment #1: text-edit-lock-x86_64-standardize-debug-rodata.patch --]
[-- Type: text/plain, Size: 1807 bytes --]
Standardize DEBUG_RODATA, removing special cases for hotplug and kprobes.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: pageexec@freemail.hu
---
arch/x86_64/mm/init.c | 23 +++++------------------
include/asm-x86_64/cacheflush.h | 22 ++++++++++++++++++++++
2 files changed, 27 insertions(+), 18 deletions(-)
Index: linux-2.6-lttng/arch/x86_64/mm/init.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86_64/mm/init.c 2007-08-18 10:02:58.000000000 -0400
+++ linux-2.6-lttng/arch/x86_64/mm/init.c 2007-08-18 10:04:14.000000000 -0400
@@ -598,25 +598,11 @@ void free_initmem(void)
void mark_rodata_ro(void)
{
- unsigned long start = (unsigned long)_stext, end;
+ unsigned long start = PFN_ALIGN(_stext);
+ unsigned long end = PFN_ALIGN(__end_rodata);
-#ifdef CONFIG_HOTPLUG_CPU
- /* It must still be possible to apply SMP alternatives. */
- if (num_possible_cpus() > 1)
- start = (unsigned long)_etext;
-#endif
-
-#ifdef CONFIG_KPROBES
- start = (unsigned long)__start_rodata;
-#endif
-
- end = (unsigned long)__end_rodata;
- start = (start + PAGE_SIZE - 1) & PAGE_MASK;
- end &= PAGE_MASK;
- if (end <= start)
- return;
-
- change_page_attr_addr(start, (end - start) >> PAGE_SHIFT, PAGE_KERNEL_RO);
+ change_page_attr_addr(start, (end - start) >> PAGE_SHIFT,
+ PAGE_KERNEL_RO);
printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
(end - start) >> 10);
@@ -629,6 +615,7 @@ void mark_rodata_ro(void)
*/
global_flush_tlb();
}
+
#endif
#ifdef CONFIG_BLK_DEV_INITRD
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-06 20:01 ` [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64 Mathieu Desnoyers
@ 2007-09-07 6:59 ` Andi Kleen
2007-09-07 14:04 ` Mathieu Desnoyers
2007-09-07 8:43 ` Ananth N Mavinakayanahalli
1 sibling, 1 reply; 20+ messages in thread
From: Andi Kleen @ 2007-09-07 6:59 UTC (permalink / raw)
To: Mathieu Desnoyers; +Cc: akpm, linux-kernel, Andi Kleen, pageexec
On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
> + sync_core();
> + /* Not strictly needed, but can speed CPU recovery up. */
That turned out to break on some VIA CPUs. Should be removed.
> + if (cpu_has_clflush)
> + for (faddr = addr; faddr < addr + len;
> + faddr += boot_cpu_data.x86_clflush_size)
> + asm("clflush (%0) " :: "r" (faddr) : "memory");
> +}
> +
> +void * text_poke_early(void *addr, const void *opcode,
> + size_t len)
> +{
> + memcpy(addr, opcode, len);
It would be best to copy __inline_memcpy from x86-64 to i386
and use that here. That avoids the dependency on a patched
memcpy and is slightly safer.
> +
> + if (len > sizeof(long)) {
> + printk(KERN_ERR "text_poke of len %zu too big (max %lu)\n",
> + len, sizeof(long));
> + BUG_ON(1);
In general, BUG_ON alone should be enough, because these values can
be recovered from the registers.
> + }
> + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> + - ((long)addr & ~(sizeof(long) - 1));
> + if (unlikely(unaligned)) {
> + printk(KERN_ERR "text_poke of at addr %p of len %zu is "
> + "unaligned (%d)\n",
> + addr, len, unaligned);
> + BUG_ON(1);
> + }
The common code should be in a common function. In fact they're so
similar that the caller could just pass a buffer for the text_set
case, couldn't it?
> +#define kernel_wp_save(cr0) \
Is there a real reason this has to be an macro? It could
be just a normal function. In fact a shared on in alternative.c.
That would also avoid adding more include dependencies.
> + do { \
> + typecheck(unsigned long, cr0); \
typecheck is probably overkill
> + preempt_disable(); \
Should disable interrupts too just to be safer?
-Andi
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-06 20:01 ` [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64 Mathieu Desnoyers
2007-09-07 6:59 ` Andi Kleen
@ 2007-09-07 8:43 ` Ananth N Mavinakayanahalli
2007-09-07 14:09 ` Mathieu Desnoyers
1 sibling, 1 reply; 20+ messages in thread
From: Ananth N Mavinakayanahalli @ 2007-09-07 8:43 UTC (permalink / raw)
To: Mathieu Desnoyers; +Cc: akpm, linux-kernel, Andi Kleen, pageexec
On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
<snip>
> Index: linux-2.6-lttng/arch/i386/kernel/alternative.c
> ===================================================================
> --- linux-2.6-lttng.orig/arch/i386/kernel/alternative.c 2007-09-06 14:32:11.000000000 -0400
> +++ linux-2.6-lttng/arch/i386/kernel/alternative.c 2007-09-06 14:59:19.000000000 -0400
> @@ -16,6 +16,99 @@
> #ifdef CONFIG_HOTPLUG_CPU
> static int smp_alt_once;
>
> +/*
> + * Warning:
> + * When you use this code to patch more than one byte of an instruction
> + * you need to make sure that other CPUs cannot execute this code in parallel.
> + * Also no thread must be currently preempted in the middle of these
> + * instructions. And on the local CPU you need to be protected again NMI or MCE
> + * handlers seeing an inconsistent instruction while you patch.
> + * Warning: read_cr0 is modified by paravirt, this is why we have _early
> + * versions. They are not in the __init section because they can be used at
> + * module load time.
> + */
> +static inline void text_sync(void *addr, size_t len)
> +{
> + void *faddr;
> +
> + sync_core();
> + /* Not strictly needed, but can speed CPU recovery up. */
> + if (cpu_has_clflush)
> + for (faddr = addr; faddr < addr + len;
> + faddr += boot_cpu_data.x86_clflush_size)
> + asm("clflush (%0) " :: "r" (faddr) : "memory");
> +}
> +
> +void * text_poke_early(void *addr, const void *opcode,
> + size_t len)
> +{
> + memcpy(addr, opcode, len);
> + text_sync(addr, len);
> + return addr;
> +}
> +
> +void * text_set_early(void *addr, int c, size_t len)
> +{
> + memset(addr, c, len);
> + text_sync(addr, len);
> + return addr;
> +}
> +
> +/*
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write.
> + */
> +void * __kprobes text_poke(void *addr, const void *opcode, size_t len)
> +{
> + unsigned long cr0;
> + int unaligned;
> +
> + if (len > sizeof(long)) {
> + printk(KERN_ERR "text_poke of len %zu too big (max %lu)\n",
> + len, sizeof(long));
> + BUG_ON(1);
> + }
> + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> + - ((long)addr & ~(sizeof(long) - 1));
> + if (unlikely(unaligned)) {
> + printk(KERN_ERR "text_poke of at addr %p of len %zu is "
> + "unaligned (%d)\n",
> + addr, len, unaligned);
> + BUG_ON(1);
> + }
> + kernel_wp_save(cr0);
> + memcpy(addr, opcode, len);
> + kernel_wp_restore(cr0);
> + text_sync(addr, len);
> + return addr;
> +}
> +
> +void * __kprobes text_set(void *addr, int c, size_t len)
> +{
> + unsigned long cr0;
> + int unaligned;
> +
> + if (len > sizeof(long)) {
> + printk(KERN_ERR "text_set of len %zu too big (max %lu)\n",
> + len, sizeof(long));
> + BUG_ON(1);
> + }
> + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> + - ((long)addr & ~(sizeof(long) - 1));
> + if (unlikely(unaligned)) {
> + printk(KERN_ERR "text_set of at addr %p of len %zu is "
> + "unaligned (%d)\n",
> + addr, len, unaligned);
> + BUG_ON(1);
> + }
> + kernel_wp_save(cr0);
> + memset(addr, c, len);
> + kernel_wp_restore(cr0);
> + text_sync(addr, len);
> + return addr;
> +}
> +
The above chunk is within ifdef CONFIG_HOTPLUG_CPU. This breaks a CONFIG_SMP=n
build:
CHK include/linux/compile.h
UPD include/linux/compile.h
arch/i386/kernel/built-in.o: In function `apply_alternatives':
/home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/alternative.c:293:
undefined reference to `text_poke_early'
arch/i386/kernel/built-in.o: In function `arch_disarm_kprobe':
/home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/kprobes.c:185:
undefined reference to `text_set'
arch/i386/kernel/built-in.o: In function `arch_arm_kprobe':
/home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/kprobes.c:180:
undefined reference to `text_set'
make: *** [.tmp_vmlinux1] Error 1
Ananth
* Re: [patch 06/10] Text Edit Lock - kprobes architecture independent support
2007-09-06 20:01 ` [patch 06/10] Text Edit Lock - kprobes architecture independent support Mathieu Desnoyers
@ 2007-09-07 10:28 ` Ananth N Mavinakayanahalli
2007-09-07 14:13 ` Mathieu Desnoyers
0 siblings, 1 reply; 20+ messages in thread
From: Ananth N Mavinakayanahalli @ 2007-09-07 10:28 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: akpm, linux-kernel, prasanna, anil.s.keshavamurthy, davem
On Thu, Sep 06, 2007 at 04:01:30PM -0400, Mathieu Desnoyers wrote:
<snip>
> @@ -716,8 +720,9 @@ static int __kprobes pre_handler_kretpro
> struct kretprobe_instance, uflist);
> ri->rp = rp;
> ri->task = current;
> + kernel_text_lock();
> arch_prepare_kretprobe(ri, regs);
> -
> + kernel_text_unlock();
pre_handler_kretprobe() is run when the entry probe for a retprobed
function is hit and cannot block. You can't take a mutex here.
And why do we need to take the kernel_text_lock() here anyway? All
arch_prepare_kretprobe() does is modify the return address stored either
on stack or a register (arch specific) to the trampoline. We don't
change any kernel text here.
Ananth
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-07 6:59 ` Andi Kleen
@ 2007-09-07 14:04 ` Mathieu Desnoyers
2007-09-07 22:35 ` Andi Kleen
0 siblings, 1 reply; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-07 14:04 UTC (permalink / raw)
To: Andi Kleen; +Cc: akpm, linux-kernel, pageexec
* Andi Kleen (andi@firstfloor.org) wrote:
> On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
> > + sync_core();
> > + /* Not strictly needed, but can speed CPU recovery up. */
>
> That turned out to break on some VIA CPUs. Should be removed.
>
Hrm, when does it break ? At boot time ? Is it the cpuid that breaks or
the clflush ? How do you work around the problem when sync_core or
clflush is called from elsewhere; does it cause a problem if I call it
when I update immediate values ?
> > + if (cpu_has_clflush)
> > + for (faddr = addr; faddr < addr + len;
> > + faddr += boot_cpu_data.x86_clflush_size)
> > + asm("clflush (%0) " :: "r" (faddr) : "memory");
> > +}
> > +
> > +void * text_poke_early(void *addr, const void *opcode,
> > + size_t len)
> > +{
> > + memcpy(addr, opcode, len);
>
> It would be best to copy __inline_memcpy from x86-64 to i386
> and use that here. That avoids the dependency on a patched
> memcpy and is slightly safer.
>
Is it me or __inline_memcpy is simply a copy of i386's __memcpy ?
Is there any reason for this name change ?
> > +
> > + if (len > sizeof(long)) {
> > + printk(KERN_ERR "text_poke of len %zu too big (max %lu)\n",
> > + len, sizeof(long));
> > + BUG_ON(1);
>
> In general BUG_ON only should be enough because these values can
> be recovered from the registers.
>
Ok.
> > + }
> > + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> > + - ((long)addr & ~(sizeof(long) - 1));
> > + if (unlikely(unaligned)) {
> > + printk(KERN_ERR "text_poke of at addr %p of len %zu is "
> > + "unaligned (%d)\n",
> > + addr, len, unaligned);
> > + BUG_ON(1);
> > + }
>
> The common code should be in a common function. In fact they're so
> similar that the caller could just pass a buffer for the text_set
> case, couldn't it?
>
I found out that doing a text_set is relatively common. What I want to
remove is things such as:
text_poke(addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}, 1);
which is :
A- ugly
B- breaking vim syntax highlighting. (actually, all the rest of the
file becomes weird after that. The problem is similar to declaration
of #define name ({ some code }). It does not really matter as long as
it is in a header, but at the middle of a C file it gets rather
annoying). (I never thought I would use vim as a coding style
reference) ;)
And what is rather different between the 2 functions is when we want to
fill multiple bytes with the same pattern (I fill the unused part of my
immediate values bypass with 0x90 nops, but I agree that I could use
add_nops if it was exported).
Declaration of a variable length array on text_set's stack would break
older compilers, so I don't think it is a neat solution either. kmalloc
seems overkill to me.
I'll try to come up with a single static function, called from both
text_set and text_poke, that will merge the code and execute either
memset or memcpy depending on a supplementary argument.
>
> > +#define kernel_wp_save(cr0) \
>
> Is there a real reason this has to be an macro? It could
> be just a normal function. In fact a shared on in alternative.c.
> That would also avoid adding more include dependencies.
>
The idea is to mimic the local_irq_save/restore semantic, where the
flags argument is passed without &. This is why I use a macro instead of
an inline function.
> > + do { \
> > + typecheck(unsigned long, cr0); \
>
> typecheck is probably overkill
>
ok, I'll remove it.
> > + preempt_disable(); \
>
> Should disable interrupts too just to be safer?
>
Well, the only thing that we really don't want here is to be scheduled
to a different CPU, so preempt disable should be enough.
The good effect of disabling interrupts is that it would make sure no
interrupt handler will run with WP flag cleared on the CPU. However, it
would add a flags parameter to kernel_wp_save/restore which would be
rather ugly :( This is why I prefer to go with preempt_disable, but I am
open to other considerations.
Mathieu
> -Andi
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-07 8:43 ` Ananth N Mavinakayanahalli
@ 2007-09-07 14:09 ` Mathieu Desnoyers
0 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-07 14:09 UTC (permalink / raw)
To: Ananth N Mavinakayanahalli; +Cc: akpm, linux-kernel, Andi Kleen, pageexec
* Ananth N Mavinakayanahalli (ananth@in.ibm.com) wrote:
> On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
>
> <snip>
>
> > Index: linux-2.6-lttng/arch/i386/kernel/alternative.c
> > ===================================================================
> > --- linux-2.6-lttng.orig/arch/i386/kernel/alternative.c 2007-09-06 14:32:11.000000000 -0400
> > +++ linux-2.6-lttng/arch/i386/kernel/alternative.c 2007-09-06 14:59:19.000000000 -0400
> > @@ -16,6 +16,99 @@
> > #ifdef CONFIG_HOTPLUG_CPU
> > static int smp_alt_once;
> >
> > +/*
> > + * Warning:
> > + * When you use this code to patch more than one byte of an instruction
> > + * you need to make sure that other CPUs cannot execute this code in parallel.
> > + * Also no thread must be currently preempted in the middle of these
> > + * instructions. And on the local CPU you need to be protected again NMI or MCE
> > + * handlers seeing an inconsistent instruction while you patch.
> > + * Warning: read_cr0 is modified by paravirt, this is why we have _early
> > + * versions. They are not in the __init section because they can be used at
> > + * module load time.
> > + */
> > +static inline void text_sync(void *addr, size_t len)
> > +{
> > + void *faddr;
> > +
> > + sync_core();
> > + /* Not strictly needed, but can speed CPU recovery up. */
> > + if (cpu_has_clflush)
> > + for (faddr = addr; faddr < addr + len;
> > + faddr += boot_cpu_data.x86_clflush_size)
> > + asm("clflush (%0) " :: "r" (faddr) : "memory");
> > +}
> > +
> > +void * text_poke_early(void *addr, const void *opcode,
> > + size_t len)
> > +{
> > + memcpy(addr, opcode, len);
> > + text_sync(addr, len);
> > + return addr;
> > +}
> > +
> > +void * text_set_early(void *addr, int c, size_t len)
> > +{
> > + memset(addr, c, len);
> > + text_sync(addr, len);
> > + return addr;
> > +}
> > +
> > +/*
> > + * Only atomic text poke/set should be allowed when not doing early patching.
> > + * It means the size must be writable atomically and the address must be aligned
> > + * in a way that permits an atomic write.
> > + */
> > +void * __kprobes text_poke(void *addr, const void *opcode, size_t len)
> > +{
> > + unsigned long cr0;
> > + int unaligned;
> > +
> > + if (len > sizeof(long)) {
> > + printk(KERN_ERR "text_poke of len %zu too big (max %lu)\n",
> > + len, sizeof(long));
> > + BUG_ON(1);
> > + }
> > + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> > + - ((long)addr & ~(sizeof(long) - 1));
> > + if (unlikely(unaligned)) {
> > + printk(KERN_ERR "text_poke of at addr %p of len %zu is "
> > + "unaligned (%d)\n",
> > + addr, len, unaligned);
> > + BUG_ON(1);
> > + }
> > + kernel_wp_save(cr0);
> > + memcpy(addr, opcode, len);
> > + kernel_wp_restore(cr0);
> > + text_sync(addr, len);
> > + return addr;
> > +}
> > +
> > +void * __kprobes text_set(void *addr, int c, size_t len)
> > +{
> > + unsigned long cr0;
> > + int unaligned;
> > +
> > + if (len > sizeof(long)) {
> > + printk(KERN_ERR "text_set of len %zu too big (max %lu)\n",
> > + len, sizeof(long));
> > + BUG_ON(1);
> > + }
> > + unaligned = (((long)addr + len - 1) & ~(sizeof(long) - 1))
> > + - ((long)addr & ~(sizeof(long) - 1));
> > + if (unlikely(unaligned)) {
> > + printk(KERN_ERR "text_set of at addr %p of len %zu is "
> > + "unaligned (%d)\n",
> > + addr, len, unaligned);
> > + BUG_ON(1);
> > + }
> > + kernel_wp_save(cr0);
> > + memset(addr, c, len);
> > + kernel_wp_restore(cr0);
> > + text_sync(addr, len);
> > + return addr;
> > +}
> > +
>
> The above chunk is within ifdef CONFIG_HOTPLUG_CPU. This breaks a CONFIG_SMP=n
> build:
>
> CHK include/linux/compile.h
> UPD include/linux/compile.h
> arch/i386/kernel/built-in.o: In function `apply_alternatives':
> /home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/alternative.c:293:
> undefined reference to `text_poke_early'
> arch/i386/kernel/built-in.o: In function `arch_disarm_kprobe':
> /home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/kprobes.c:185:
> undefined reference to `text_set'
> arch/i386/kernel/built-in.o: In function `arch_arm_kprobe':
> /home/ananth/kprobes/marker/linux-2.6.23-rc4/arch/i386/kernel/kprobes.c:180:
> undefined reference to `text_set'
> make: *** [.tmp_vmlinux1] Error 1
>
> Ananth
Thanks, fixing for next release.
* Re: [patch 06/10] Text Edit Lock - kprobes architecture independent support
2007-09-07 10:28 ` Ananth N Mavinakayanahalli
@ 2007-09-07 14:13 ` Mathieu Desnoyers
0 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-07 14:13 UTC (permalink / raw)
To: Ananth N Mavinakayanahalli
Cc: akpm, linux-kernel, prasanna, anil.s.keshavamurthy, davem
* Ananth N Mavinakayanahalli (ananth@in.ibm.com) wrote:
> On Thu, Sep 06, 2007 at 04:01:30PM -0400, Mathieu Desnoyers wrote:
>
> <snip>
>
> > @@ -716,8 +720,9 @@ static int __kprobes pre_handler_kretpro
> > struct kretprobe_instance, uflist);
> > ri->rp = rp;
> > ri->task = current;
> > + kernel_text_lock();
> > arch_prepare_kretprobe(ri, regs);
> > -
> > + kernel_text_unlock();
>
> pre_handler_kretprobe() is run when the entry probe for a retprobed
> function is hit and cannot block. You can't take a mutex here.
>
> And why do we need to take the kernel_text_lock() here anyway? All
> arch_prepare_kretprobe() does is modify the return address stored either
> on stack or a register (arch specific) to the trampoline. We don't
> change any kernel text here.
>
Yep, good catch. Fixing.
> Ananth
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-07 14:04 ` Mathieu Desnoyers
@ 2007-09-07 22:35 ` Andi Kleen
2007-09-11 19:59 ` Mathieu Desnoyers
0 siblings, 1 reply; 20+ messages in thread
From: Andi Kleen @ 2007-09-07 22:35 UTC (permalink / raw)
To: Mathieu Desnoyers; +Cc: Andi Kleen, akpm, linux-kernel, pageexec
On Fri, Sep 07, 2007 at 10:04:42AM -0400, Mathieu Desnoyers wrote:
> * Andi Kleen (andi@firstfloor.org) wrote:
> > On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
> > > + sync_core();
> > > + /* Not strictly needed, but can speed CPU recovery up. */
> >
> > That turned out to break on some VIA CPUs. Should be removed.
> >
>
> Hrm, when does it break ? At boot time ? Is it the cpuid that breaks or
Yes.
> the clflush ? How do you work around the problem when sync_core or
The CLFLUSH
> clflush is called from elsewhere; does it cause a problem if I call it
> when I update immediate values ?
Unknown currently what are the exact circumstances.
For the other cases it is ignored right now, but when we get
more information it might be needed to clear the CLFLUSH
feature bit on those CPUs.
> Is it me or __inline_memcpy is simply a copy of i386's __memcpy ?
> Is there any reason for this name change ?
x86-64 __memcpy does something different.
It might make more sense
At some point I hope to change the i386 setup to be more like x86-64
anyways -- the x86-64 version is imho much better.
> A- ugly
> B- breaking vim syntax highlighting. (actually, all the rest of the
> file becomes weird after that. The problem is similar to declaration
> of #define name ({ some code }). It does not really matter as long as
> it is in a header, but at the middle of a C file it gets rather
> annoying). (I never thought I would use vim as a coding style
> reference) ;)
Then define a macro
#define BREAKPOINTS(x) \
((unsigned char [x]){ [0 ... (x)-1] = BREAKPOINT_INSTRUCTION })
and use that
> And what is rather different between the 2 functions is when we want to
> fill multiple bytes with the same pattern (I fill the unused part of my
> immediate values bypass with 0x90 nops, but I agree that I could use
> add_nops if it was exported).
>
> Declaration of a variable length array on text_set's stack would break
> older compilers, so I don't think it is a neat solution either. kmalloc
All supported gccs support variable length arrays.
> The idea is to mimic the local_irq_save/restore semantic, where the
> flags argument is passed without &. This is why I use a macro instead of
> an inline function
Sounds like a bogus idea to me.
> The good effect of disabling interrupts is that it would make sure no
> interrupt handler will run with WP flag cleared on the CPU. However, it
Yes that was my point. Not a very strong one admittedly.
-Andi
* Re: [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64
2007-09-07 22:35 ` Andi Kleen
@ 2007-09-11 19:59 ` Mathieu Desnoyers
0 siblings, 0 replies; 20+ messages in thread
From: Mathieu Desnoyers @ 2007-09-11 19:59 UTC (permalink / raw)
To: Andi Kleen; +Cc: akpm, linux-kernel, pageexec
* Andi Kleen (andi@firstfloor.org) wrote:
> On Fri, Sep 07, 2007 at 10:04:42AM -0400, Mathieu Desnoyers wrote:
> > * Andi Kleen (andi@firstfloor.org) wrote:
> > > On Thu, Sep 06, 2007 at 04:01:29PM -0400, Mathieu Desnoyers wrote:
> > > > + sync_core();
> > > > + /* Not strictly needed, but can speed CPU recovery up. */
> > >
> > > That turned out to break on some VIA CPUs. Should be removed.
> > >
> >
> > Hrm, when does it break ? At boot time ? Is it the cpuid that breaks or
>
> Yes.
>
> > the clflush ? How do you work around the problem when sync_core or
>
> The CLFLUSH
>
> > clflush is called from elsewhere; does it cause a problem if I call it
> > when I update immediate values ?
>
> Unknown currently what are the exact circumstances.
>
> For the other cases it is ignored right now, but when we get
> more information it might be needed to clear the CLFLUSH
> feature bit on those CPUs.
>
Ok, it's better to simply remove the CLFLUSH since it's not needed
anyway. Do you recommend removing the CLFLUSH only from the _early code
(executed before alternatives are applied), or from the normal
execution-time code as well ?
> > Is it me or __inline_memcpy is simply a copy of i386's __memcpy ?
> > Is there any reason for this name change ?
>
> x86-64 __memcpy does something different.
>
> It might make more sense
>
> At some point I hope to change the i386 setup to be more like x86-64
> anyways -- the x86-64 version is imho much better.
>
It looks like groundwork that should be done in i386 before we start
using __inline_memcpy in alternative.c, which is shared between i386 and
x86_64. I haven't seen an alternative that would touch memcpy at all
anyway; am I missing something ? Also, being "faster" is not really an
issue there, since it is not done often. The only thing that matters is
whether memcpy could be touched by alternative.c.
> > A- ugly
> > B- breaking vim syntax highlighting. (actually, all the rest of the
> > file becomes weird after that. The problem is similar to a declaration
> > of #define name ({ some code }). It does not really matter as long as
> > it is in a header, but in the middle of a C file it gets rather
> > annoying). (I never thought I would use vim as a coding style
> > reference) ;)
>
> Then define a macro
>
> #define BREAKPOINTS(x) \
> ((unsigned char [x]){ [0 ... (x)-1] = BREAKPOINT_INSTRUCTIONS })
>
> and use that
>
How about adding :
#define INIT_ARRAY(type, val, len) ((type [len]){ [0 ... (len)-1] = (val) })
to kernel.h ? It would be more generic.
> > And what is rather different between the 2 functions is when we want to
> > fill multiple bytes with the same pattern (I fill the unused part of my
> > immediate values bypass with 0x90 nops, but I agree that I could use
> > add_nops if it was exported).
> >
> > Declaration of a variable length array on text_set's stack would break
> > older compilers, so I don't think it is a neat solution either. kmalloc
>
> All supported gccs support variable length arrays.
>
ok
> > The idea is to mimic the local_irq_save/restore semantic, where the
> > flags argument is passed without &. This is why I use a macro instead of
> > an inline function
>
> Sounds like a bogus idea to me.
>
So you would prefer
unsigned long cr0;
kernel_wp_save(&cr0)
..
kernel_wp_restore(cr0);
to
unsigned long cr0;
kernel_wp_save(cr0)
..
kernel_wp_restore(cr0);
? It seems odd to me, since everyone would expect flags save/restore to
behave like local_irq_save/restore. Or I may misunderstand your point.
> > The good effect of disabling interrupts is that it would make sure no
> > interrupt handler will run with WP flag cleared on the CPU. However, it
>
> Yes that was my point. Not a very strong one admittedly.
>
I agree that most kernel_wp_save/restore users should disable interrupts
before calling it (it would be a "good practice"), but I don't expect to
encapsulate irq disabling in these macros, since it is not mandatory.
Mathieu
> -Andi
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
Thread overview: 20+ messages
2007-09-06 20:01 [patch 00/10] Text Edit Lock for 2.6.23-rc4-mm1 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 01/10] Kprobes - use a mutex to protect the instruction pages list Mathieu Desnoyers
2007-09-06 20:01 ` [patch 02/10] Kprobes - do not use kprobes mutex in arch code Mathieu Desnoyers
2007-09-06 20:01 ` [patch 03/10] Kprobes - declare kprobe_mutex static Mathieu Desnoyers
2007-09-06 20:01 ` [patch 04/10] Text Edit Lock - Architecture Independent Code Mathieu Desnoyers
2007-09-06 20:01 ` [patch 05/10] Text Edit Lock - Alternative code for i386 and x86_64 Mathieu Desnoyers
2007-09-07 6:59 ` Andi Kleen
2007-09-07 14:04 ` Mathieu Desnoyers
2007-09-07 22:35 ` Andi Kleen
2007-09-11 19:59 ` Mathieu Desnoyers
2007-09-07 8:43 ` Ananth N Mavinakayanahalli
2007-09-07 14:09 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 06/10] Text Edit Lock - kprobes architecture independent support Mathieu Desnoyers
2007-09-07 10:28 ` Ananth N Mavinakayanahalli
2007-09-07 14:13 ` Mathieu Desnoyers
2007-09-06 20:01 ` [patch 07/10] Text Edit Lock - kprobes i386 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 08/10] Text Edit Lock - kprobes x86_64 Mathieu Desnoyers
2007-09-06 20:01 ` [patch 09/10] Text Edit Lock - i386 standardize debug rodata Mathieu Desnoyers
2007-09-06 20:01 ` [patch 10/10] Text Edit Lock - x86_64 " Mathieu Desnoyers
2007-08-27 15:56 [patch 00/10] Text Edit Lock Mathieu Desnoyers
2007-08-27 15:56 ` [patch 10/10] Text Edit Lock - x86_64 standardize debug rodata Mathieu Desnoyers