* [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary
@ 2012-11-22 8:22 Zhang Yanfei
[not found] ` <50ADE0C2.1000106-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-22 8:22 UTC (permalink / raw)
To: x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
Marcelo Tosatti, Gleb Natapov
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Currently, kdump just makes all the logical processors leave VMX operation by
executing the VMXOFF instruction, so any VMCSs active on those processors may
be corrupted. But sometimes we need the VMCSs to debug guest images contained
in the host vmcore. To prevent the corruption, we should VMCLEAR the VMCSs
before executing VMXOFF.
This patch set provides a way to VMCLEAR the guest-related VMCSs on all cpus
before executing VMXOFF when doing kdump, ensuring that the VMCSs in the
vmcore are up to date and not corrupted.
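As a rough illustration of the per-cpu ordering this series is after in the
crash path (a sketch only; the first helper name below is a placeholder, not
a symbol these patches add, while cpu_emergency_vmxoff() is the existing
kdump-path helper):
/* sketch: what each cpu should do before jumping into the crash kernel */
static void crash_shut_down_virtualization(void)
{
	/*
	 * First flush every VMCS still loaded on this cpu back to memory,
	 * so the copy captured in the vmcore is consistent.
	 */
	vmclear_all_loaded_vmcss_on_this_cpu();	/* placeholder hook */

	/*
	 * Only then leave VMX operation; VMXOFF after VMCLEAR can no longer
	 * leave VMCS data cached on the processor and missing from the dump.
	 */
	cpu_emergency_vmxoff();
}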
Changelog from v7 to v8:
1. KEXEC: go back to using the name crash_notifier_list,
remove the comments related to KVM,
and call atomic_notifier_call_chain directly.
Changelog from v6 to v7:
1. KVM-INTEL: in hardware_disable, we needn't disable the
vmclear, so remove it.
Changelog from v5 to v6:
1. KEXEC: the atomic notifier list renamed:
crash_notifier_list --> vmclear_notifier_list
2. KVM-INTEL: provide empty functions if CONFIG_KEXEC is
not defined and remove unnecessary #ifdef's.
Changelog from v4 to v5:
1. use an atomic notifier instead of a function call, so that
all the vmclear code lives in vmx.c.
Changelog from v3 to v4:
1. add a new percpu variable vmclear_skipped to skip
vmclear in kdump in some conditions.
Changelog from v2 to v3:
1. remove unnecessary conditions in function
cpu_emergency_clear_loaded_vmcss as Marcelo suggested.
Changelog from v1 to v2:
1. remove the sysctl and clear VMCSs unconditionally.
Zhang Yanfei (2):
x86/kexec: add a new atomic notifier list for kdump
KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump
arch/x86/include/asm/kexec.h | 2 +
arch/x86/kernel/crash.c | 9 +++++
arch/x86/kvm/vmx.c | 70 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 81 insertions(+), 0 deletions(-)
* [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
[not found] ` <50ADE0C2.1000106-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2012-11-22 8:23 ` Zhang Yanfei
2012-11-26 15:08 ` Eric W. Biederman
2012-11-22 8:25 ` [PATCH v8 2/2] KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump Zhang Yanfei
1 sibling, 1 reply; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-22 8:23 UTC (permalink / raw)
To: x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
Marcelo Tosatti, Gleb Natapov
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
This patch adds an atomic notifier list named crash_notifier_list.
Currently, when the kvm-intel module is loaded, a notifier is registered
in the list so that the VMCSs loaded on all cpus can be VMCLEAR'd if
needed.
Signed-off-by: Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
arch/x86/include/asm/kexec.h | 2 ++
arch/x86/kernel/crash.c | 9 +++++++++
2 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 317ff17..5e22b00 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -163,6 +163,8 @@ struct kimage_arch {
};
#endif
+extern struct atomic_notifier_head crash_notifier_list;
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_X86_KEXEC_H */
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 13ad899..c5b2f70 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -16,6 +16,8 @@
#include <linux/delay.h>
#include <linux/elf.h>
#include <linux/elfcore.h>
+#include <linux/module.h>
+#include <linux/notifier.h>
#include <asm/processor.h>
#include <asm/hardirq.h>
@@ -30,6 +32,9 @@
int in_crash_kexec;
+ATOMIC_NOTIFIER_HEAD(crash_notifier_list);
+EXPORT_SYMBOL_GPL(crash_notifier_list);
+
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
@@ -46,6 +51,8 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
#endif
crash_save_cpu(regs, cpu);
+ atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
+
/* Disable VMX or SVM if needed.
*
* We need to disable virtualization on all CPUs.
@@ -88,6 +95,8 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
kdump_nmi_shootdown_cpus();
+ atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
+
/* Booting kdump kernel with VMX or SVM enabled won't work,
* because (among other limitations) we can't disable paging
* with the virt flags.
--
1.7.1
* [PATCH v8 2/2] KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump
[not found] ` <50ADE0C2.1000106-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2012-11-22 8:23 ` [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump Zhang Yanfei
@ 2012-11-22 8:25 ` Zhang Yanfei
1 sibling, 0 replies; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-22 8:25 UTC (permalink / raw)
To: x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
Marcelo Tosatti, Gleb Natapov
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
The notifier is registered in crash_notifier_list when the kvm-intel
module is loaded. The bitmap indicates whether the VMCLEAR operation
should be performed in kdump; its bits are set and cleared under
different conditions.
Signed-off-by: Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
arch/x86/kvm/vmx.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 70 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4ff0ab9..d890372 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -41,6 +41,7 @@
#include <asm/i387.h>
#include <asm/xcr.h>
#include <asm/perf_event.h>
+#include <asm/kexec.h>
#include "trace.h"
@@ -963,6 +964,49 @@ static void vmcs_load(struct vmcs *vmcs)
vmcs, phys_addr);
}
+#ifdef CONFIG_KEXEC
+/*
+ * This bitmap is used to indicate whether the vmclear
+ * operation is enabled on all cpus. All disabled by
+ * default.
+ */
+static cpumask_t crash_vmclear_enabled_bitmap = CPU_MASK_NONE;
+
+static inline void crash_enable_local_vmclear(int cpu)
+{
+ cpumask_set_cpu(cpu, &crash_vmclear_enabled_bitmap);
+}
+
+static inline void crash_disable_local_vmclear(int cpu)
+{
+ cpumask_clear_cpu(cpu, &crash_vmclear_enabled_bitmap);
+}
+
+static inline int crash_local_vmclear_enabled(int cpu)
+{
+ return cpumask_test_cpu(cpu, &crash_vmclear_enabled_bitmap);
+}
+
+static void vmclear_local_loaded_vmcss(void);
+static int crash_vmclear_local_loaded_vmcss(struct notifier_block *this,
+ unsigned long val, void *ptr)
+{
+ int cpu = raw_smp_processor_id();
+
+ if (crash_local_vmclear_enabled(cpu))
+ vmclear_local_loaded_vmcss();
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block crash_vmclear_notifier = {
+ .notifier_call = crash_vmclear_local_loaded_vmcss,
+};
+#else
+static inline void crash_enable_local_vmclear(int cpu) { }
+static inline void crash_disable_local_vmclear(int cpu) { }
+#endif /* CONFIG_KEXEC */
+
static void __loaded_vmcs_clear(void *arg)
{
struct loaded_vmcs *loaded_vmcs = arg;
@@ -972,8 +1016,10 @@ static void __loaded_vmcs_clear(void *arg)
return; /* vcpu migration can race with cpu offline */
if (per_cpu(current_vmcs, cpu) == loaded_vmcs->vmcs)
per_cpu(current_vmcs, cpu) = NULL;
+ crash_disable_local_vmclear(cpu);
list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
loaded_vmcs_init(loaded_vmcs);
+ crash_enable_local_vmclear(cpu);
}
static void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs)
@@ -1491,8 +1537,10 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
local_irq_disable();
+ crash_disable_local_vmclear(cpu);
list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link,
&per_cpu(loaded_vmcss_on_cpu, cpu));
+ crash_enable_local_vmclear(cpu);
local_irq_enable();
/*
@@ -2302,6 +2350,18 @@ static int hardware_enable(void *garbage)
return -EBUSY;
INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
+
+ /*
+ * Now we can enable the vmclear operation in kdump
+ * since the loaded_vmcss_on_cpu list on this cpu
+ * has been initialized.
+ *
+ * Though the cpu is not in VMX operation now, there
+ * is no problem to enable the vmclear operation
+ * for the loaded_vmcss_on_cpu list is empty!
+ */
+ crash_enable_local_vmclear(cpu);
+
rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
test_bits = FEATURE_CONTROL_LOCKED;
@@ -7230,6 +7290,11 @@ static int __init vmx_init(void)
if (r)
goto out3;
+#ifdef CONFIG_KEXEC
+ atomic_notifier_chain_register(&crash_notifier_list,
+ &crash_vmclear_notifier);
+#endif
+
vmx_disable_intercept_for_msr(MSR_FS_BASE, false);
vmx_disable_intercept_for_msr(MSR_GS_BASE, false);
vmx_disable_intercept_for_msr(MSR_KERNEL_GS_BASE, true);
@@ -7265,6 +7330,11 @@ static void __exit vmx_exit(void)
free_page((unsigned long)vmx_io_bitmap_b);
free_page((unsigned long)vmx_io_bitmap_a);
+#ifdef CONFIG_KEXEC
+ atomic_notifier_chain_unregister(&crash_notifier_list,
+ &crash_vmclear_notifier);
+#endif
+
kvm_exit();
}
--
1.7.1
* Re: [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary
2012-11-22 8:22 [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary Zhang Yanfei
[not found] ` <50ADE0C2.1000106-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2012-11-25 14:26 ` Gleb Natapov
[not found] ` <20121125142642.GC25516-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2012-11-26 1:55 ` Zhang Yanfei
2 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2012-11-25 14:26 UTC (permalink / raw)
To: Zhang Yanfei
Cc: x86@kernel.org, kexec@lists.infradead.org, Marcelo Tosatti,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
On Thu, Nov 22, 2012 at 04:22:26PM +0800, Zhang Yanfei wrote:
> Currently, kdump just makes all the logical processors leave VMX operation by
> executing VMXOFF instruction, so any VMCSs active on the logical processors may
> be corrupted. But, sometimes, we need the VMCSs to debug guest images contained
> in the host vmcore. To prevent the corruption, we should VMCLEAR the VMCSs before
> executing the VMXOFF instruction.
>
> The patch set provides a way to VMCLEAR vmcss related to guests on all cpus before
> executing the VMXOFF when doing kdump. This is used to ensure the VMCSs in the
> vmcore updated and non-corrupted.
>
Looks fine to me, but the kexec part should get acks from the kexec maintainers.
> Changelog from v7 to v8:
> 1. KEXEC: regression for using name crash_notifier_list
> and remove comments related to KVM
> and just call function atomic_notifier_call_chain directly.
>
> Changelog from v6 to v7:
> 1. KVM-INTEL: in hardware_disable, we needn't disable the
> vmclear, so remove it.
>
> Changelog from v5 to v6:
> 1. KEXEC: the atomic notifier list renamed:
> crash_notifier_list --> vmclear_notifier_list
> 2. KVM-INTEL: provide empty functions if CONFIG_KEXEC is
> not defined and remove unnecessary #ifdef's.
>
> Changelog from v4 to v5:
> 1. use an atomic notifier instead of function call, so
> have all the vmclear codes in vmx.c.
>
> Changelog from v3 to v4:
> 1. add a new percpu variable vmclear_skipped to skip
> vmclear in kdump in some conditions.
>
> Changelog from v2 to v3:
> 1. remove unnecessary conditions in function
> cpu_emergency_clear_loaded_vmcss as Marcelo suggested.
>
> Changelog from v1 to v2:
> 1. remove the sysctl and clear VMCSs unconditionally.
>
> Zhang Yanfei (2):
> x86/kexec: add a new atomic notifier list for kdump
> KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump
>
> arch/x86/include/asm/kexec.h | 2 +
> arch/x86/kernel/crash.c | 9 +++++
> arch/x86/kvm/vmx.c | 70 ++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 81 insertions(+), 0 deletions(-)
--
Gleb.
* Re: [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary
[not found] ` <20121125142642.GC25516-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2012-11-26 1:50 ` Zhang Yanfei
0 siblings, 0 replies; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-26 1:50 UTC (permalink / raw)
To: Gleb Natapov
Cc: x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Marcelo Tosatti
On 2012-11-25 22:26, Gleb Natapov wrote:
> On Thu, Nov 22, 2012 at 04:22:26PM +0800, Zhang Yanfei wrote:
>> Currently, kdump just makes all the logical processors leave VMX operation by
>> executing VMXOFF instruction, so any VMCSs active on the logical processors may
>> be corrupted. But, sometimes, we need the VMCSs to debug guest images contained
>> in the host vmcore. To prevent the corruption, we should VMCLEAR the VMCSs before
>> executing the VMXOFF instruction.
>>
>> The patch set provides a way to VMCLEAR vmcss related to guests on all cpus before
>> executing the VMXOFF when doing kdump. This is used to ensure the VMCSs in the
>> vmcore updated and non-corrupted.
>>
> Looks fine to me, but kexec part should get acks from exec maintainers.
Thanks. I will ask the kexec maintainer.
>
>> Changelog from v7 to v8:
>> 1. KEXEC: regression for using name crash_notifier_list
>> and remove comments related to KVM
>> and just call function atomic_notifier_call_chain directly.
>>
>> Changelog from v6 to v7:
>> 1. KVM-INTEL: in hardware_disable, we needn't disable the
>> vmclear, so remove it.
>>
>> Changelog from v5 to v6:
>> 1. KEXEC: the atomic notifier list renamed:
>> crash_notifier_list --> vmclear_notifier_list
>> 2. KVM-INTEL: provide empty functions if CONFIG_KEXEC is
>> not defined and remove unnecessary #ifdef's.
>>
>> Changelog from v4 to v5:
>> 1. use an atomic notifier instead of function call, so
>> have all the vmclear codes in vmx.c.
>>
>> Changelog from v3 to v4:
>> 1. add a new percpu variable vmclear_skipped to skip
>> vmclear in kdump in some conditions.
>>
>> Changelog from v2 to v3:
>> 1. remove unnecessary conditions in function
>> cpu_emergency_clear_loaded_vmcss as Marcelo suggested.
>>
>> Changelog from v1 to v2:
>> 1. remove the sysctl and clear VMCSs unconditionally.
>>
>> Zhang Yanfei (2):
>> x86/kexec: add a new atomic notifier list for kdump
>> KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump
>>
>> arch/x86/include/asm/kexec.h | 2 +
>> arch/x86/kernel/crash.c | 9 +++++
>> arch/x86/kvm/vmx.c | 70 ++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 81 insertions(+), 0 deletions(-)
>
> --
> Gleb.
>
* Re: [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary
2012-11-22 8:22 [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary Zhang Yanfei
[not found] ` <50ADE0C2.1000106-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2012-11-25 14:26 ` [PATCH v8 0/2] x86: vmclear vmcss on all cpus when doing kdump if necessary Gleb Natapov
@ 2012-11-26 1:55 ` Zhang Yanfei
2 siblings, 0 replies; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-26 1:55 UTC (permalink / raw)
To: Eric W. Biederman
Cc: x86@kernel.org, kexec@lists.infradead.org, Marcelo Tosatti,
Gleb Natapov, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Hello Eric,
Would you please help review this patch set and give some comments?
Thanks
Zhang
On 2012-11-22 16:22, Zhang Yanfei wrote:
> Currently, kdump just makes all the logical processors leave VMX operation by
> executing VMXOFF instruction, so any VMCSs active on the logical processors may
> be corrupted. But, sometimes, we need the VMCSs to debug guest images contained
> in the host vmcore. To prevent the corruption, we should VMCLEAR the VMCSs before
> executing the VMXOFF instruction.
>
> The patch set provides a way to VMCLEAR vmcss related to guests on all cpus before
> executing the VMXOFF when doing kdump. This is used to ensure the VMCSs in the
> vmcore updated and non-corrupted.
>
> Changelog from v7 to v8:
> 1. KEXEC: regression for using name crash_notifier_list
> and remove comments related to KVM
> and just call function atomic_notifier_call_chain directly.
>
> Changelog from v6 to v7:
> 1. KVM-INTEL: in hardware_disable, we needn't disable the
> vmclear, so remove it.
>
> Changelog from v5 to v6:
> 1. KEXEC: the atomic notifier list renamed:
> crash_notifier_list --> vmclear_notifier_list
> 2. KVM-INTEL: provide empty functions if CONFIG_KEXEC is
> not defined and remove unnecessary #ifdef's.
>
> Changelog from v4 to v5:
> 1. use an atomic notifier instead of function call, so
> have all the vmclear codes in vmx.c.
>
> Changelog from v3 to v4:
> 1. add a new percpu variable vmclear_skipped to skip
> vmclear in kdump in some conditions.
>
> Changelog from v2 to v3:
> 1. remove unnecessary conditions in function
> cpu_emergency_clear_loaded_vmcss as Marcelo suggested.
>
> Changelog from v1 to v2:
> 1. remove the sysctl and clear VMCSs unconditionally.
>
> Zhang Yanfei (2):
> x86/kexec: add a new atomic notifier list for kdump
> KVM-INTEL: add a notifier and a bitmap to support VMCLEAR in kdump
>
> arch/x86/include/asm/kexec.h | 2 +
> arch/x86/kernel/crash.c | 9 +++++
> arch/x86/kvm/vmx.c | 70 ++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 81 insertions(+), 0 deletions(-)
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
2012-11-22 8:23 ` [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump Zhang Yanfei
@ 2012-11-26 15:08 ` Eric W. Biederman
[not found] ` <87ip8sxuyh.fsf-aS9lmoZGLiVWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2012-11-26 15:08 UTC (permalink / raw)
To: Zhang Yanfei
Cc: x86@kernel.org, kexec@lists.infradead.org, Marcelo Tosatti,
Gleb Natapov, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Zhang Yanfei <zhangyanfei@cn.fujitsu.com> writes:
> This patch adds an atomic notifier list named crash_notifier_list.
> Currently, when loading kvm-intel module, a notifier will be registered
> in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
> needed.
crash_notifier_list ick gag please no. Effectively this makes the kexec
on panic code path undebuggable.
Instead we need to use direct function calls to whatever you are doing.
If a direct function call is too complex then the piece of code you want
to call is almost certainly too complex to be calling on a code path
like this.
Eric
> Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
> ---
> arch/x86/include/asm/kexec.h | 2 ++
> arch/x86/kernel/crash.c | 9 +++++++++
> 2 files changed, 11 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> index 317ff17..5e22b00 100644
> --- a/arch/x86/include/asm/kexec.h
> +++ b/arch/x86/include/asm/kexec.h
> @@ -163,6 +163,8 @@ struct kimage_arch {
> };
> #endif
>
> +extern struct atomic_notifier_head crash_notifier_list;
> +
> #endif /* __ASSEMBLY__ */
>
> #endif /* _ASM_X86_KEXEC_H */
> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> index 13ad899..c5b2f70 100644
> --- a/arch/x86/kernel/crash.c
> +++ b/arch/x86/kernel/crash.c
> @@ -16,6 +16,8 @@
> #include <linux/delay.h>
> #include <linux/elf.h>
> #include <linux/elfcore.h>
> +#include <linux/module.h>
> +#include <linux/notifier.h>
>
> #include <asm/processor.h>
> #include <asm/hardirq.h>
> @@ -30,6 +32,9 @@
>
> int in_crash_kexec;
>
> +ATOMIC_NOTIFIER_HEAD(crash_notifier_list);
> +EXPORT_SYMBOL_GPL(crash_notifier_list);
> +
> #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
>
> static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
> @@ -46,6 +51,8 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
> #endif
> crash_save_cpu(regs, cpu);
>
> + atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
> +
> /* Disable VMX or SVM if needed.
> *
> * We need to disable virtualization on all CPUs.
> @@ -88,6 +95,8 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
>
> kdump_nmi_shootdown_cpus();
>
> + atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
> +
> /* Booting kdump kernel with VMX or SVM enabled won't work,
> * because (among other limitations) we can't disable paging
> * with the virt flags.
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
[not found] ` <87ip8sxuyh.fsf-aS9lmoZGLiVWk0Htik3J/w@public.gmane.org>
@ 2012-11-26 17:20 ` Gleb Natapov
[not found] ` <20121126172054.GF12969-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2012-11-26 17:20 UTC (permalink / raw)
To: Eric W. Biederman
Cc: Marcelo Tosatti, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Zhang Yanfei
On Mon, Nov 26, 2012 at 09:08:54AM -0600, Eric W. Biederman wrote:
> Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org> writes:
>
> > This patch adds an atomic notifier list named crash_notifier_list.
> > Currently, when loading kvm-intel module, a notifier will be registered
> > in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
> > needed.
>
> crash_notifier_list ick gag please no. Effectively this makes the kexec
> on panic code path undebuggable.
>
> Instead we need to use direct function calls to whatever you are doing.
>
The code walks a linked list in the kvm-intel module and calls vmclear on
whatever it finds there. Since the function has to reside in the kvm-intel
module, it cannot be called directly. Is a callback pointer that is set
by kvm-intel more acceptable?
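A minimal sketch of that alternative, assuming a function pointer on the
crash.c side which kvm-intel sets on load and clears on unload (the name and
the RCU protection here are illustrative, not settled code):
/* crash.c side, illustrative only */
typedef void (*crash_vmclear_fn)(void);
crash_vmclear_fn crash_vmclear_loaded_vmcss;
EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);

static void cpu_crash_vmclear_loaded_vmcss(void)
{
	crash_vmclear_fn do_vmclear;

	rcu_read_lock();
	do_vmclear = rcu_dereference(crash_vmclear_loaded_vmcss);
	if (do_vmclear)
		do_vmclear();
	rcu_read_unlock();
}
kvm-intel would then publish its handler with rcu_assign_pointer() in
vmx_init() and clear it again (followed by synchronize_rcu()) in vmx_exit().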
> If a direct function call is too complex then the piece of code you want
> to call is almost certainly too complex to be calling on a code path
> like this.
>
> Eric
>
> > Signed-off-by: Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> > ---
> > arch/x86/include/asm/kexec.h | 2 ++
> > arch/x86/kernel/crash.c | 9 +++++++++
> > 2 files changed, 11 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> > index 317ff17..5e22b00 100644
> > --- a/arch/x86/include/asm/kexec.h
> > +++ b/arch/x86/include/asm/kexec.h
> > @@ -163,6 +163,8 @@ struct kimage_arch {
> > };
> > #endif
> >
> > +extern struct atomic_notifier_head crash_notifier_list;
> > +
> > #endif /* __ASSEMBLY__ */
> >
> > #endif /* _ASM_X86_KEXEC_H */
> > diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> > index 13ad899..c5b2f70 100644
> > --- a/arch/x86/kernel/crash.c
> > +++ b/arch/x86/kernel/crash.c
> > @@ -16,6 +16,8 @@
> > #include <linux/delay.h>
> > #include <linux/elf.h>
> > #include <linux/elfcore.h>
> > +#include <linux/module.h>
> > +#include <linux/notifier.h>
> >
> > #include <asm/processor.h>
> > #include <asm/hardirq.h>
> > @@ -30,6 +32,9 @@
> >
> > int in_crash_kexec;
> >
> > +ATOMIC_NOTIFIER_HEAD(crash_notifier_list);
> > +EXPORT_SYMBOL_GPL(crash_notifier_list);
> > +
> > #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
> >
> > static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
> > @@ -46,6 +51,8 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
> > #endif
> > crash_save_cpu(regs, cpu);
> >
> > + atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
> > +
> > /* Disable VMX or SVM if needed.
> > *
> > * We need to disable virtualization on all CPUs.
> > @@ -88,6 +95,8 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
> >
> > kdump_nmi_shootdown_cpus();
> >
> > + atomic_notifier_call_chain(&crash_notifier_list, 0, NULL);
> > +
> > /* Booting kdump kernel with VMX or SVM enabled won't work,
> > * because (among other limitations) we can't disable paging
> > * with the virt flags.
--
Gleb.
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
[not found] ` <20121126172054.GF12969-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2012-11-26 17:43 ` Eric W. Biederman
[not found] ` <87fw3wuuoh.fsf-aS9lmoZGLiVWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2012-11-26 17:43 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Zhang Yanfei
Gleb Natapov <gleb-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
> On Mon, Nov 26, 2012 at 09:08:54AM -0600, Eric W. Biederman wrote:
>> Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org> writes:
>>
>> > This patch adds an atomic notifier list named crash_notifier_list.
>> > Currently, when loading kvm-intel module, a notifier will be registered
>> > in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
>> > needed.
>>
>> crash_notifier_list ick gag please no. Effectively this makes the kexec
>> on panic code path undebuggable.
>>
>> Instead we need to use direct function calls to whatever you are doing.
>>
> The code walks linked list in kvm-intel module and calls vmclear on
> whatever it finds there. Since the function have to resides in kvm-intel
> module it cannot be called directly. Is callback pointer that is set
> by kvm-intel more acceptable?
Yes a specific callback function is more acceptable. Looking a little
deeper vmclear_local_loaded_vmcss is not particularly acceptable. It is
doing a lot of work that is unnecessary to save the virtual registers
on the kexec on panic path.
In fact I wonder if it might not just be easier to call vmcs_clear to a
fixed per cpu buffer.
Performing list walking in interrupt context without locking in
vmclear_local_loaded_vmcss looks a bit scary. Not that locking would
make it any better, as locking would simply add one more way to deadlock
the system. Only an rcu list walk is at all safe. A list walk that
modifies the list as vmclear_local_loaded_vmcss does is definitely not safe.
Eric
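Purely to illustrate that last point, an RCU-protected walk of the same list
would look roughly like the following; this is not what the posted patch
does, and it would also require the writers in vmx.c to switch to
list_add_rcu()/list_del_rcu():
	struct loaded_vmcs *v;

	rcu_read_lock();
	list_for_each_entry_rcu(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
				loaded_vmcss_on_cpu_link)
		vmcs_clear(v->vmcs);
	rcu_read_unlock();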
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
[not found] ` <87fw3wuuoh.fsf-aS9lmoZGLiVWk0Htik3J/w@public.gmane.org>
@ 2012-11-26 17:53 ` Gleb Natapov
2012-11-26 18:18 ` Eric W. Biederman
0 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2012-11-26 17:53 UTC (permalink / raw)
To: Eric W. Biederman
Cc: Marcelo Tosatti, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Zhang Yanfei
On Mon, Nov 26, 2012 at 11:43:10AM -0600, Eric W. Biederman wrote:
> Gleb Natapov <gleb-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
>
> > On Mon, Nov 26, 2012 at 09:08:54AM -0600, Eric W. Biederman wrote:
> >> Zhang Yanfei <zhangyanfei-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org> writes:
> >>
> >> > This patch adds an atomic notifier list named crash_notifier_list.
> >> > Currently, when loading kvm-intel module, a notifier will be registered
> >> > in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
> >> > needed.
> >>
> >> crash_notifier_list ick gag please no. Effectively this makes the kexec
> >> on panic code path undebuggable.
> >>
> >> Instead we need to use direct function calls to whatever you are doing.
> >>
> > The code walks linked list in kvm-intel module and calls vmclear on
> > whatever it finds there. Since the function have to resides in kvm-intel
> > module it cannot be called directly. Is callback pointer that is set
> > by kvm-intel more acceptable?
>
> Yes a specific callback function is more acceptable. Looking a little
> deeper vmclear_local_loaded_vmcss is not particularly acceptable. It is
> doing a lot of work that is unnecessary to save the virtual registers
> on the kexec on panic path.
>
What work are you referring to in particular that may not be acceptable?
> In fact I wonder if it might not just be easier to call vmcs_clear to a
> fixed per cpu buffer.
>
There may be more than one vmcs loaded on a cpu, hence the list.
> Performing list walking in interrupt context without locking in
> vmclear_local_loaded vmcss looks a bit scary. Not that locking would
> make it any better, as locking would simply add one more way to deadlock
> the system. Only an rcu list walk is at all safe. A list walk that
> modifies the list as vmclear_local_loaded_vmcss does is definitely not safe.
>
The list vmclear_local_loaded_vmcss walks is per cpu. Zhang's kvm patch
disables the kexec callback while the list is modified.
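For reference, the pattern in patch 2/2 that this refers to, abridged from
the vmx_vcpu_load() hunk above; note it is the per-cpu enable bit, not the
irq-off section (which does not block NMIs), that keeps the crash-time
callback away from a half-updated list:
	local_irq_disable();
	crash_disable_local_vmclear(cpu);
	list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link,
		 &per_cpu(loaded_vmcss_on_cpu, cpu));
	crash_enable_local_vmclear(cpu);
	local_irq_enable();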
--
Gleb.
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
2012-11-26 17:53 ` Gleb Natapov
@ 2012-11-26 18:18 ` Eric W. Biederman
2012-11-27 1:32 ` Zhang Yanfei
0 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2012-11-26 18:18 UTC (permalink / raw)
To: Gleb Natapov
Cc: Zhang Yanfei, x86@kernel.org, kexec@lists.infradead.org,
Marcelo Tosatti, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org
Gleb Natapov <gleb@redhat.com> writes:
> On Mon, Nov 26, 2012 at 11:43:10AM -0600, Eric W. Biederman wrote:
>> Gleb Natapov <gleb@redhat.com> writes:
>>
>> > On Mon, Nov 26, 2012 at 09:08:54AM -0600, Eric W. Biederman wrote:
>> >> Zhang Yanfei <zhangyanfei@cn.fujitsu.com> writes:
>> >>
>> >> > This patch adds an atomic notifier list named crash_notifier_list.
>> >> > Currently, when loading kvm-intel module, a notifier will be registered
>> >> > in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
>> >> > needed.
>> >>
>> >> crash_notifier_list ick gag please no. Effectively this makes the kexec
>> >> on panic code path undebuggable.
>> >>
>> >> Instead we need to use direct function calls to whatever you are doing.
>> >>
>> > The code walks linked list in kvm-intel module and calls vmclear on
>> > whatever it finds there. Since the function have to resides in kvm-intel
>> > module it cannot be called directly. Is callback pointer that is set
>> > by kvm-intel more acceptable?
>>
>> Yes a specific callback function is more acceptable. Looking a little
>> deeper vmclear_local_loaded_vmcss is not particularly acceptable. It is
>> doing a lot of work that is unnecessary to save the virtual registers
>> on the kexec on panic path.
>>
> What work are you referring to in particular that may not be
> acceptable?
The unnecessary work that I see is all of the software state changing:
unlinking things from linked lists, flipping variables. None of that
appears related to the fundamental issue of saving cpu state.
Simply reusing a function that does more than what is strictly required
makes me nervous. What is the chance that the function will grow
with maintenance and add constructs that are not safe in a kexec on
panic situation.
>> In fact I wonder if it might not just be easier to call vmcs_clear to a
>> fixed per cpu buffer.
>>
> There may be more than one vmcs loaded on a cpu, hence the list.
>
>> Performing list walking in interrupt context without locking in
>> vmclear_local_loaded vmcss looks a bit scary. Not that locking would
>> make it any better, as locking would simply add one more way to deadlock
>> the system. Only an rcu list walk is at all safe. A list walk that
>> modifies the list as vmclear_local_loaded_vmcss does is definitely not safe.
>>
> The list vmclear_local_loaded walks is per cpu. Zhang's kvm patch
> disables kexec callback while list is modified.
If the list is only modified on its cpu and we are running on that cpu,
that does look like it will give the necessary protections. It isn't
particularly clear at first glance that this is the case, unfortunately.
Eric
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
2012-11-26 18:18 ` Eric W. Biederman
@ 2012-11-27 1:32 ` Zhang Yanfei
2012-11-27 1:49 ` Eric W. Biederman
0 siblings, 1 reply; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-27 1:32 UTC (permalink / raw)
To: Eric W. Biederman
Cc: Gleb Natapov, x86@kernel.org, kexec@lists.infradead.org,
Marcelo Tosatti, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org
On 2012-11-27 02:18, Eric W. Biederman wrote:
> Gleb Natapov <gleb@redhat.com> writes:
>
>> On Mon, Nov 26, 2012 at 11:43:10AM -0600, Eric W. Biederman wrote:
>>> Gleb Natapov <gleb@redhat.com> writes:
>>>
>>>> On Mon, Nov 26, 2012 at 09:08:54AM -0600, Eric W. Biederman wrote:
>>>>> Zhang Yanfei <zhangyanfei@cn.fujitsu.com> writes:
>>>>>
>>>>>> This patch adds an atomic notifier list named crash_notifier_list.
>>>>>> Currently, when loading kvm-intel module, a notifier will be registered
>>>>>> in the list to enable vmcss loaded on all cpus to be VMCLEAR'd if
>>>>>> needed.
>>>>>
>>>>> crash_notifier_list ick gag please no. Effectively this makes the kexec
>>>>> on panic code path undebuggable.
>>>>>
>>>>> Instead we need to use direct function calls to whatever you are doing.
>>>>>
>>>> The code walks linked list in kvm-intel module and calls vmclear on
>>>> whatever it finds there. Since the function have to resides in kvm-intel
>>>> module it cannot be called directly. Is callback pointer that is set
>>>> by kvm-intel more acceptable?
>>>
>>> Yes a specific callback function is more acceptable. Looking a little
>>> deeper vmclear_local_loaded_vmcss is not particularly acceptable. It is
>>> doing a lot of work that is unnecessary to save the virtual registers
>>> on the kexec on panic path.
>>>
>> What work are you referring to in particular that may not be
>> acceptable?
>
> The unnecessary work that I was see is all of the software state
> changing. Unlinking things from linked lists flipping variables.
> None of that appears related to the fundamental issue saving cpu
> state.
>
> Simply reusing a function that does more than what is strictly required
> makes me nervous. What is the chance that the function will grow
> with maintenance and add constructs that are not safe in a kexec on
> panic situtation.
So in summary,
1. a specific callback function instead of a notifier?
2. Instead of calling vmclear_local_loaded_vmcss, the vmclear operation
will just call vmclear on every vmcs loaded on the cpu?
like below:
static void crash_vmclear_local_loaded_vmcss(void)
{
	int cpu = raw_smp_processor_id();
	struct loaded_vmcs *v, *n;

	if (!crash_local_vmclear_enabled(cpu))
		return;

	list_for_each_entry_safe(v, n, &per_cpu(loaded_vmcss_on_cpu, cpu),
				 loaded_vmcss_on_cpu_link)
		vmcs_clear(v->vmcs);
}
right?
Thanks
Zhang
>
>>> In fact I wonder if it might not just be easier to call vmcs_clear to a
>>> fixed per cpu buffer.
>>>
>> There may be more than one vmcs loaded on a cpu, hence the list.
>>
>>> Performing list walking in interrupt context without locking in
>>> vmclear_local_loaded vmcss looks a bit scary. Not that locking would
>>> make it any better, as locking would simply add one more way to deadlock
>>> the system. Only an rcu list walk is at all safe. A list walk that
>>> modifies the list as vmclear_local_loaded_vmcss does is definitely not safe.
>>>
>> The list vmclear_local_loaded walks is per cpu. Zhang's kvm patch
>> disables kexec callback while list is modified.
>
> If the list is only modified on it's cpu and we are running on that cpu
> that does look like it will give the necessary protections. It isn't
> particularly clear at first glance that is the case unfortunately.
>
> Eric
>
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
2012-11-27 1:32 ` Zhang Yanfei
@ 2012-11-27 1:49 ` Eric W. Biederman
2012-11-27 1:53 ` Zhang Yanfei
0 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2012-11-27 1:49 UTC (permalink / raw)
To: Zhang Yanfei
Cc: Gleb Natapov, x86@kernel.org, kexec@lists.infradead.org,
Marcelo Tosatti, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org
Zhang Yanfei <zhangyanfei@cn.fujitsu.com> writes:
> So in summary,
>
> 1. a specific callback function instead of a notifier?
Yes.
> 2. Instead of calling vmclear_local_loaded_vmcss, the vmclear operation
> will just call the vmclear on every vmcss loaded on the cpu?
>
> like below:
>
> static void crash_vmclear_local_loaded_vmcss(void)
> {
> int cpu = raw_smp_processor_id();
> struct loaded_vmcs *v, *n;
>
> if (!crash_local_vmclear_enabled(cpu))
> return;
>
> list_for_each_entry_safe(v, n, &per_cpu(loaded_vmcss_on_cpu, cpu),
> loaded_vmcss_on_cpu_link)
> vmcs_clear(v->vmcs);
> }
>
> right?
Yeah that looks good. I would do list_for_each_entry because the list
isn't changing.
Eric
* Re: [PATCH v8 1/2] x86/kexec: add a new atomic notifier list for kdump
2012-11-27 1:49 ` Eric W. Biederman
@ 2012-11-27 1:53 ` Zhang Yanfei
0 siblings, 0 replies; 14+ messages in thread
From: Zhang Yanfei @ 2012-11-27 1:53 UTC (permalink / raw)
To: Eric W. Biederman
Cc: Gleb Natapov, x86@kernel.org, kexec@lists.infradead.org,
Marcelo Tosatti, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org
On 2012-11-27 09:49, Eric W. Biederman wrote:
> Zhang Yanfei <zhangyanfei@cn.fujitsu.com> writes:
>
>> So in summary,
>>
>> 1. a specific callback function instead of a notifier?
>
> Yes.
>
>> 2. Instead of calling vmclear_local_loaded_vmcss, the vmclear operation
>> will just call the vmclear on every vmcss loaded on the cpu?
>>
>> like below:
>>
>> static void crash_vmclear_local_loaded_vmcss(void)
>> {
>> int cpu = raw_smp_processor_id();
>> struct loaded_vmcs *v, *n;
>>
>> if (!crash_local_vmclear_enabled(cpu))
>> return;
>>
>> list_for_each_entry_safe(v, n, &per_cpu(loaded_vmcss_on_cpu, cpu),
>> loaded_vmcss_on_cpu_link)
>> vmcs_clear(v->vmcs);
>> }
>>
>> right?
>
> Yeah that looks good. I would do list_for_each_entry because the list
> isn't changing.
OK.
I will update the patch and resend it.
Zhang
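For reference, the handler with that change applied would presumably look
like this (a sketch based on the discussion above, not the resent patch
itself):
static void crash_vmclear_local_loaded_vmcss(void)
{
	int cpu = raw_smp_processor_id();
	struct loaded_vmcs *v;

	if (!crash_local_vmclear_enabled(cpu))
		return;

	list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
			    loaded_vmcss_on_cpu_link)
		vmcs_clear(v->vmcs);
}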