virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH v2 02/11] x86/kexec: Add extra pointers to transition page table PGD, PUD, PMD and PTE
       [not found]   ` <1353423893-23125-3-git-send-email-daniel.kiper@oracle.com>
@ 2012-11-20 15:52     ` Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2012-11-20 15:52 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel, konrad.wilk, andrew.cooper3, x86, kexec, linux-kernel,
	virtualization, mingo, ebiederm, hpa, tglx

>>> On 20.11.12 at 16:04, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> Some implementations (e.g. Xen PVOPS) cannot use part of the identity page
> table to construct the transition page table. This means that they require
> separate PUDs, PMDs and PTEs for the virtual and physical (identity)
> mappings. To satisfy that requirement, add extra pointers to PGD, PUD, PMD
> and PTE and align existing code.

As said for v1 already - this is not really a requirement of the
interface, or else none of our Xen kernels since 2.6.30 would
have worked. I don't think it is desirable to introduce overhead
for everyone if it's not even needed for Xen.

Jan

> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> ---
>  arch/x86/include/asm/kexec.h       |   10 +++++++---
>  arch/x86/kernel/machine_kexec_64.c |   12 ++++++------
>  2 files changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> index 317ff17..3cf5600 100644
> --- a/arch/x86/include/asm/kexec.h
> +++ b/arch/x86/include/asm/kexec.h
> @@ -157,9 +157,13 @@ struct kimage_arch {
>  };
>  #else
>  struct kimage_arch {
> -	pud_t *pud;
> -	pmd_t *pmd;
> -	pte_t *pte;
> +	pgd_t *pgd;
> +	pud_t *pud0;
> +	pud_t *pud1;
> +	pmd_t *pmd0;
> +	pmd_t *pmd1;
> +	pte_t *pte0;
> +	pte_t *pte1;
>  };
>  #endif
>  
> diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
> index b3ea9db..976e54b 100644
> --- a/arch/x86/kernel/machine_kexec_64.c
> +++ b/arch/x86/kernel/machine_kexec_64.c
> @@ -137,9 +137,9 @@ out:
>  
>  static void free_transition_pgtable(struct kimage *image)
>  {
> -	free_page((unsigned long)image->arch.pud);
> -	free_page((unsigned long)image->arch.pmd);
> -	free_page((unsigned long)image->arch.pte);
> +	free_page((unsigned long)image->arch.pud0);
> +	free_page((unsigned long)image->arch.pmd0);
> +	free_page((unsigned long)image->arch.pte0);
>  }
>  
>  static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
> @@ -157,7 +157,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
>  		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
>  		if (!pud)
>  			goto err;
> -		image->arch.pud = pud;
> +		image->arch.pud0 = pud;
>  		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
>  	}
>  	pud = pud_offset(pgd, vaddr);
> @@ -165,7 +165,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
>  		pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
>  		if (!pmd)
>  			goto err;
> -		image->arch.pmd = pmd;
> +		image->arch.pmd0 = pmd;
>  		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
>  	}
>  	pmd = pmd_offset(pud, vaddr);
> @@ -173,7 +173,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
>  		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
>  		if (!pte)
>  			goto err;
> -		image->arch.pte = pte;
> +		image->arch.pte0 = pte;
>  		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
>  	}
>  	pte = pte_offset_kernel(pmd, vaddr);
> -- 
> 1.5.6.5
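The walk that this hunk renames can be modelled outside the kernel. Below is a simplified userspace C sketch of the init_transition_pgtable() pattern — each level is a zeroed, demand-allocated table, and every table allocated is remembered so free_transition_pgtable() can release it later. Assumptions: 512-entry tables and only the "0" pointers; real page tables store physical addresses plus _KERNPG_TABLE flag bits, which this sketch omits.

```c
#include <stdlib.h>

#define PTRS_PER_TABLE 512

struct arch_state {                     /* models struct kimage_arch */
	void **pud0, **pmd0, **pte0;    /* remembered for later freeing */
};

static void **alloc_table(void)
{
	/* models get_zeroed_page(GFP_KERNEL) */
	return calloc(PTRS_PER_TABLE, sizeof(void *));
}

/* Walk pgd -> pud -> pmd for vaddr, allocating any missing level. */
static int map_vaddr(struct arch_state *st, void **pgd, unsigned long vaddr)
{
	void **pud, **pmd;
	unsigned long i;

	i = (vaddr >> 39) & 511;        /* PGD index */
	if (!pgd[i]) {
		if (!(pgd[i] = alloc_table()))
			return -1;
		st->pud0 = pgd[i];      /* image->arch.pud0 = pud; */
	}
	pud = pgd[i];

	i = (vaddr >> 30) & 511;        /* PUD index */
	if (!pud[i]) {
		if (!(pud[i] = alloc_table()))
			return -1;
		st->pmd0 = pud[i];      /* image->arch.pmd0 = pmd; */
	}
	pmd = pud[i];

	i = (vaddr >> 21) & 511;        /* PMD index */
	if (!pmd[i]) {
		if (!(pmd[i] = alloc_table()))
			return -1;
		st->pte0 = pmd[i];      /* image->arch.pte0 = pte; */
	}
	return 0;
}
```

The point of the patch above is that a Xen PVOPS kernel would need a second chain of such tables (the "1" pointers) for the identity mapping, rather than reusing entries of the existing identity page table.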

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found] ` <1353423893-23125-2-git-send-email-daniel.kiper@oracle.com>
       [not found]   ` <1353423893-23125-3-git-send-email-daniel.kiper@oracle.com>
@ 2012-11-20 16:40   ` Eric W. Biederman
       [not found]     ` <20121121105221.GA2925@host-192-168-1-59.local.net-space.pl>
  1 sibling, 1 reply; 23+ messages in thread
From: Eric W. Biederman @ 2012-11-20 16:40 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel, konrad.wilk, andrew.cooper3, x86, kexec, linux-kernel,
	virtualization, mingo, jbeulich, hpa, tglx

Daniel Kiper <daniel.kiper@oracle.com> writes:

> Some kexec/kdump implementations (e.g. Xen PVOPS) could not use default
> functions or require some changes in behavior of kexec/kdump generic code.
> To cope with that problem kexec_ops struct was introduced. It allows
> a developer to replace all or some functions and control some
> functionality of kexec/kdump generic code.
>
> Default behavior of kexec/kdump generic code is not changed.

Ick.

> v2 - suggestions/fixes:
>    - add comment for kexec_ops.crash_alloc_temp_store member
>      (suggested by Konrad Rzeszutek Wilk),
>    - simplify kexec_ops usage
>      (suggested by Konrad Rzeszutek Wilk).
>
> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> ---
>  include/linux/kexec.h |   26 ++++++++++
>  kernel/kexec.c        |  131 +++++++++++++++++++++++++++++++++++++------------
>  2 files changed, 125 insertions(+), 32 deletions(-)
>
> diff --git a/include/linux/kexec.h b/include/linux/kexec.h
> index d0b8458..c8d0b35 100644
> --- a/include/linux/kexec.h
> +++ b/include/linux/kexec.h
> @@ -116,7 +116,33 @@ struct kimage {
>  #endif
>  };
>  
> +struct kexec_ops {
> +	/*
> +	 * Some kdump implementations (e.g. Xen PVOPS dom0) could not access
> +	 * directly crash kernel memory area. In this situation they must
> +	 * allocate memory outside of it and later move contents from temporary
> +	 * storage to final resting places (usually done by relocate_kernel()).
> +	 * Such behavior could be enforced by setting
> +	 * crash_alloc_temp_store member to true.
> +	 */

Why in the world would Xen not be able to access crash kernel memory?
As currently defined it is normal memory that the kernel chooses not to
use.

If relocate kernel can access that memory you definitely can access the
memory so the comment does not make any sense.

> +	bool crash_alloc_temp_store;
> +	struct page *(*kimage_alloc_pages)(gfp_t gfp_mask,
> +						unsigned int order,
> +						unsigned long limit);
> +	void (*kimage_free_pages)(struct page *page);
> +	unsigned long (*page_to_pfn)(struct page *page);
> +	struct page *(*pfn_to_page)(unsigned long pfn);
> +	unsigned long (*virt_to_phys)(volatile void *address);
> +	void *(*phys_to_virt)(unsigned long address);
> +	int (*machine_kexec_prepare)(struct kimage *image);
> +	int (*machine_kexec_load)(struct kimage *image);
> +	void (*machine_kexec_cleanup)(struct kimage *image);
> +	void (*machine_kexec_unload)(struct kimage *image);
> +	void (*machine_kexec_shutdown)(void);
> +	void (*machine_kexec)(struct kimage *image);
> +};

Ugh.  This is a nasty abstraction.

You are mixing and matching a bunch of things together here.

If you need to override machine_kexec_xxx please do that on a per
architecture basis.

Special case overrides of page_to_pfn, pfn_to_page, virt_to_phys,
phys_to_virt, and friends seem completely inappropriate.

There may be a point to all of these but you are mixing and matching
things badly.


Eric
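The pattern Eric objects to is a global ops table pre-filled with generic defaults, whose members a platform overwrites at boot. A minimal userspace sketch of that dispatch (names and the two members are illustrative, reduced from the quoted struct) shows why it bundles very different abstraction levels — address translation next to machine-level hooks:

```c
#include <stdio.h>

/* Reduced model of the proposed kexec_ops dispatch: generic code always
 * calls through a global ops table that starts out filled with defaults;
 * a platform (e.g. Xen) would overwrite selected members.  Note that
 * virt_to_phys (an address-space primitive) and machine_kexec (a whole-
 * machine operation) sit in the same table, which is the mixing Eric
 * calls inappropriate. */
struct kexec_ops {
	unsigned long (*virt_to_phys)(volatile void *addr);
	void (*machine_kexec)(void);
};

static unsigned long default_virt_to_phys(volatile void *addr)
{
	return (unsigned long)addr;     /* identity, stand-in for __pa() */
}

static void default_machine_kexec(void)
{
	puts("native machine_kexec");
}

static struct kexec_ops kexec_ops = {
	.virt_to_phys  = default_virt_to_phys,
	.machine_kexec = default_machine_kexec,
};

/* Generic code path: every call dispatches through the table. */
static unsigned long kexec_virt_to_phys(volatile void *addr)
{
	return kexec_ops.virt_to_phys(addr);
}
```

Eric's alternative — overriding machine_kexec_xxx per architecture — keeps the machine hooks in arch code and leaves the address primitives alone.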

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]     ` <20121121105221.GA2925@host-192-168-1-59.local.net-space.pl>
@ 2012-11-22 12:15       ` Eric W. Biederman
  2012-11-22 17:37         ` H. Peter Anvin
                           ` (4 more replies)
  0 siblings, 5 replies; 23+ messages in thread
From: Eric W. Biederman @ 2012-11-22 12:15 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel, konrad.wilk, andrew.cooper3, x86, kexec, linux-kernel,
	virtualization, mingo, jbeulich, hpa, tglx

Daniel Kiper <daniel.kiper@oracle.com> writes:

> On Tue, Nov 20, 2012 at 08:40:39AM -0800, ebiederm@xmission.com wrote:
>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>
>> > Some kexec/kdump implementations (e.g. Xen PVOPS) could not use default
>> > functions or require some changes in behavior of kexec/kdump generic code.
>> > To cope with that problem kexec_ops struct was introduced. It allows
>> > a developer to replace all or some functions and control some
>> > functionality of kexec/kdump generic code.
>> >
>> > Default behavior of kexec/kdump generic code is not changed.
>>
>> Ick.
>>
>> > v2 - suggestions/fixes:
>> >    - add comment for kexec_ops.crash_alloc_temp_store member
>> >      (suggested by Konrad Rzeszutek Wilk),
>> >    - simplify kexec_ops usage
>> >      (suggested by Konrad Rzeszutek Wilk).
>> >
>> > Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
>> > ---
>> >  include/linux/kexec.h |   26 ++++++++++
>> >  kernel/kexec.c        |  131 +++++++++++++++++++++++++++++++++++++------------
>> >  2 files changed, 125 insertions(+), 32 deletions(-)
>> >
>> > diff --git a/include/linux/kexec.h b/include/linux/kexec.h
>> > index d0b8458..c8d0b35 100644
>> > --- a/include/linux/kexec.h
>> > +++ b/include/linux/kexec.h
>> > @@ -116,7 +116,33 @@ struct kimage {
>> >  #endif
>> >  };
>> >
>> > +struct kexec_ops {
>> > +	/*
>> > +	 * Some kdump implementations (e.g. Xen PVOPS dom0) could not access
>> > +	 * directly crash kernel memory area. In this situation they must
>> > +	 * allocate memory outside of it and later move contents from temporary
>> > +	 * storage to final resting places (usually done by relocate_kernel()).
>> > +	 * Such behavior could be enforced by setting
>> > +	 * crash_alloc_temp_store member to true.
>> > +	 */
>>
>> Why in the world would Xen not be able to access crash kernel memory?
>> As currently defined it is normal memory that the kernel chooses not to
>> use.
>>
>> If relocate kernel can access that memory you definitely can access the
>> memory so the comment does not make any sense.
>
> Crash kernel memory is reserved by the Xen hypervisor and only the Xen
> hypervisor has access to it. dom0 does not have any mapping of this area.
> However, relocate_kernel() has access to crash kernel memory
> because it is executed by Xen hypervisor and whole machine
> memory is identity mapped.

This is all weird.  Doubly so since this code is multi-arch and you have
a set of requirements no other arch has had.

I recall that Xen uses kexec in a unique manner.  What is the hypervisor
interface and how is it used?

Is this for when the hypervisor crashes and we want a crash dump of
that?



>> > +	bool crash_alloc_temp_store;
>> > +	struct page *(*kimage_alloc_pages)(gfp_t gfp_mask,
>> > +						unsigned int order,
>> > +						unsigned long limit);
>> > +	void (*kimage_free_pages)(struct page *page);
>> > +	unsigned long (*page_to_pfn)(struct page *page);
>> > +	struct page *(*pfn_to_page)(unsigned long pfn);
>> > +	unsigned long (*virt_to_phys)(volatile void *address);
>> > +	void *(*phys_to_virt)(unsigned long address);
>> > +	int (*machine_kexec_prepare)(struct kimage *image);
>> > +	int (*machine_kexec_load)(struct kimage *image);
>> > +	void (*machine_kexec_cleanup)(struct kimage *image);
>> > +	void (*machine_kexec_unload)(struct kimage *image);
>> > +	void (*machine_kexec_shutdown)(void);
>> > +	void (*machine_kexec)(struct kimage *image);
>> > +};
>>
>> Ugh.  This is a nasty abstraction.
>>
>> You are mixing and matching a bunch of things together here.
>>
>> If you need to override machine_kexec_xxx please do that on a per
>> architecture basis.
>
> Yes, it is possible, but I think that it is worth doing at that
> level because it could be useful for other archs too (e.g. the Xen ARM port
> is under development). Then we do not need to duplicate that functionality
> in arch code. Additionally, Xen requires machine_kexec_load and
> machine_kexec_unload hooks, which are not available in the current generic
> kexec/kdump code.


Let me be clear.  kexec_ops as you have implemented it is absolutely
unacceptable.

Your kexec_ops is not an abstraction but a hack that enshrines in stone
implementation details.

>> Special case overrides of page_to_pfn, pfn_to_page, virt_to_phys,
>> phys_to_virt, and friends seem completely inappropriate.
>
> They are required in the Xen PVOPS case. If we do not do it that way,
> then we at least need to duplicate almost all of the existing generic
> kexec/kdump code in arch-dependent files, not to mention capturing the
> relevant syscall and other things. I think that this is the wrong way.

A different definition of phys_to_virt and page_to_pfn for one specific
function is total nonsense.

It may actually be better to have a completely different code path.
This looks more like code abuse than code reuse.

Successful code reuse depends upon not breaking the assumptions on which
the code relies, or modifying the code so that the new modified
assumptions are clear.  In this case you might as well define up as down
for all of the sense kexec_ops makes.

>> There may be a point to all of these but you are mixing and matching
>> things badly.
>
> Do you wish to split this kexec_ops struct into something which
> works with addresses and something which is responsible for
> loading, unloading and executing kexec/kdump? I am able to change
> that but I would like to know a bit about your vision first.

My vision is that we should have code that makes sense.

My suspicion is that what you want is a cousin of the existing kexec
system call.  Perhaps what is needed is a flag to say use the firmware
kexec system call.

I absolutely do not understand what Xen is trying to do.  kexec by
design should not require any firmware specific hooks.  kexec at this
level should only need to care about the processor architeture.  Clearly
what you are doing with Xen requires special hooks separate even from
the normal paravirt hooks.  So I do not understand you are trying to do.

It needs to be clear from the code what is happening differently in the
Xen case.  Otherwise the code is unmaintainable as no one will be able
to understand it.

Eric
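Eric's suggested alternative — a flag selecting a firmware-mediated kexec path — can be sketched as a single branch at load time instead of per-helper overrides. In the sketch below, KEXEC_USE_FIRMWARE and both handlers are hypothetical illustrations; only KEXEC_ON_CRASH is a real kexec_load(2) flag, shown for contrast.

```c
/* Hypothetical: rather than letting a platform override generic helpers
 * piecemeal via an ops table, the load path branches once on a flag and
 * hands the whole image to the firmware/hypervisor. */
#define KEXEC_ON_CRASH     0x00000001UL  /* real kexec_load(2) flag */
#define KEXEC_USE_FIRMWARE 0x00010000UL  /* hypothetical */

/* The handlers return distinct markers so the dispatch is observable. */
static int native_kexec_load(void)   { return 0; }
static int firmware_kexec_load(void) { return 1; }  /* e.g. a kexec hypercall */

static int do_kexec_load(unsigned long flags)
{
	if (flags & KEXEC_USE_FIRMWARE)
		return firmware_kexec_load();
	return native_kexec_load();
}
```

The appeal of this shape is that the Xen-specific behavior is visible in exactly one place, rather than being scattered across redefinitions of page_to_pfn, virt_to_phys and friends.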

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-22 12:15       ` Eric W. Biederman
@ 2012-11-22 17:37         ` H. Peter Anvin
  2012-11-23  9:56           ` Jan Beulich
  2012-11-22 17:47         ` H. Peter Anvin
                           ` (3 subsequent siblings)
  4 siblings, 1 reply; 23+ messages in thread
From: H. Peter Anvin @ 2012-11-22 17:37 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: xen-devel, konrad.wilk, andrew.cooper3, Daniel Kiper, x86, kexec,
	linux-kernel, virtualization, mingo, jbeulich, tglx

On 11/22/2012 04:15 AM, Eric W. Biederman wrote:
>
> Let me be clear.  kexec_ops as you have implemented it is absolutely
> unacceptable.
>
> Your kexec_ops is not an abstraction but a hack that enshrines in stone
> implementation details.
>

This is the kind of stuff that is absolutely endemic to the Xen 
endeavour, and which is why Xen is such a disease.  The design principle 
seems to have been "hey, let's go and replace random Linux kernel 
internals with our own stuff, and make them ABIs, so that they can never 
change.  Oh, and let's not bother documenting the constraints we're 
imposing, that might make the code manageable."

I actually talked to Ian Jackson at LCE, and mentioned among other 
things the bogosity of requiring a PUD page for three-level paging in 
Linux -- a bogosity which has spread from Xen into native.  It's a page 
wasted for no good reason, since it only contains 32 bytes worth of 
data, *inherently*.  Furthermore, contrary to popular belief, it is 
*not* a page table per se.

Ian told me: "I didn't know we did that, and we shouldn't have to." 
Here we have suffered this overhead for at least six years, because *XEN 
FUCKED UP AND NO ONE ELSE HAD ANY WAY OF KNOWING THAT*.

Now we know that it can "maybe"(!!!) be fixed, if we are willing to 
spend time working on a dying platform, whereas we have already suffered 
the damage during the height of its importance.

	-hpa


-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-22 12:15       ` Eric W. Biederman
  2012-11-22 17:37         ` H. Peter Anvin
@ 2012-11-22 17:47         ` H. Peter Anvin
       [not found]         ` <50AE6542.3020302@zytor.com>
                           ` (2 subsequent siblings)
  4 siblings, 0 replies; 23+ messages in thread
From: H. Peter Anvin @ 2012-11-22 17:47 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: xen-devel, konrad.wilk, andrew.cooper3, Daniel Kiper, x86, kexec,
	linux-kernel, virtualization, mingo, jbeulich, tglx

The other thing that should be considered here is how utterly 
preposterous the notion of doing in-guest crash dumping is in a system 
that contains a hypervisor.  The reason for kdump is that on bare metal 
there are no other options, but in a hypervisor system the right thing 
should be for the hypervisor to do the dump (possibly spawning a clean 
I/O domain if the I/O domain is necessary to access the media.)

There is absolutely no reason to have a crashkernel sitting around in 
each guest, consuming memory, and possibly getting corrupted.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]         ` <50AE6542.3020302@zytor.com>
@ 2012-11-22 18:07           ` Andrew Cooper
       [not found]           ` <50AE69EF.4060909@citrix.com>
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2012-11-22 18:07 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

On 22/11/12 17:47, H. Peter Anvin wrote:
> The other thing that should be considered here is how utterly 
> preposterous the notion of doing in-guest crash dumping is in a system 
> that contains a hypervisor.  The reason for kdump is that on bare metal 
> there are no other options, but in a hypervisor system the right thing 
> should be for the hypervisor to do the dump (possibly spawning a clean 
> I/O domain if the I/O domain is necessary to access the media.)
>
> There is absolutely no reason to have a crashkernel sitting around in 
> each guest, consuming memory, and possibly get corrupt.
>
> 	-hpa
>

I agree that regular guests should not be using kexec/kdump. 
However, this patch series is required for allowing a pvops kernel to be
a crash kernel for Xen, which is very important from dom0/Xen's point of
view.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]           ` <50AE69EF.4060909@citrix.com>
@ 2012-11-22 22:26             ` H. Peter Anvin
       [not found]             ` <09b41677-c9e7-4cd4-84a0-a1cb483d551d@email.android.com>
  1 sibling, 0 replies; 23+ messages in thread
From: H. Peter Anvin @ 2012-11-22 22:26 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

Bullshit.  This should be a separate domain.

Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>On 22/11/12 17:47, H. Peter Anvin wrote:
>> The other thing that should be considered here is how utterly
>> preposterous the notion of doing in-guest crash dumping is in a system
>> that contains a hypervisor.  The reason for kdump is that on bare metal
>> there are no other options, but in a hypervisor system the right thing
>> should be for the hypervisor to do the dump (possibly spawning a clean
>> I/O domain if the I/O domain is necessary to access the media.)
>>
>> There is absolutely no reason to have a crashkernel sitting around in
>> each guest, consuming memory, and possibly get corrupt.
>>
>> 	-hpa
>>
>
>I agree that regular guests should not be using the kexec/kdump.
>However, this patch series is required for allowing a pvops kernel to be
>a crash kernel for Xen, which is very important from dom0/Xen's point of
>view.

-- 
Sent from my mobile phone. Please excuse brevity and lack of formatting.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]         ` <50AE6542.3020302@zytor.com>
  2012-11-22 18:07           ` Andrew Cooper
       [not found]           ` <50AE69EF.4060909@citrix.com>
@ 2012-11-23  0:12           ` Andrew Cooper
       [not found]           ` <50AEBF86.50501@citrix.com>
  3 siblings, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2012-11-23  0:12 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

On 22/11/2012 17:47, H. Peter Anvin wrote:
> The other thing that should be considered here is how utterly 
> preposterous the notion of doing in-guest crash dumping is in a system 
> that contains a hypervisor.  The reason for kdump is that on bare metal 
> there are no other options, but in a hypervisor system the right thing 
> should be for the hypervisor to do the dump (possibly spawning a clean 
> I/O domain if the I/O domain is necessary to access the media.)
>
> There is absolutely no reason to have a crashkernel sitting around in 
> each guest, consuming memory, and possibly get corrupt.
>
> 	-hpa
>

(Your reply to my email which I can see on the xen devel archive appears
to have gotten lost somewhere inside the citrix email system, so
apologies for replying out of order)

The kdump kernel loaded by dom0 is for when Xen crashes, not for when
dom0 crashes (although a dom0 crash does admittedly lead to a Xen crash)

There is no possible way it could be a separate domain; Xen completely
ceases to function as soon as it jumps to the entry point of the kdump image.

~Andrew

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]           ` <50AEBF86.50501@citrix.com>
@ 2012-11-23  1:34             ` H. Peter Anvin
  2012-11-23  1:38             ` H. Peter Anvin
  1 sibling, 0 replies; 23+ messages in thread
From: H. Peter Anvin @ 2012-11-23  1:34 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

Ok... that *sort of* makes sense, but also underscores how utterly different this is from a normal kexec.

Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>On 22/11/2012 17:47, H. Peter Anvin wrote:
>> The other thing that should be considered here is how utterly
>> preposterous the notion of doing in-guest crash dumping is in a system
>> that contains a hypervisor.  The reason for kdump is that on bare metal
>> there are no other options, but in a hypervisor system the right thing
>> should be for the hypervisor to do the dump (possibly spawning a clean
>> I/O domain if the I/O domain is necessary to access the media.)
>>
>> There is absolutely no reason to have a crashkernel sitting around in
>> each guest, consuming memory, and possibly get corrupt.
>>
>> 	-hpa
>>
>
>(Your reply to my email which I can see on the xen devel archive appears
>to have gotten lost somewhere inside the citrix email system, so
>apologies for replying out of order)
>
>The kdump kernel loaded by dom0 is for when Xen crashes, not for when
>dom0 crashes (although a dom0 crash does admittedly lead to a Xen crash)
>
>There is no possible way it could be a separate domain; Xen completely
>ceases to function as soon as jumps to the entry point of the kdump
>image.
>
>~Andrew

-- 
Sent from my mobile phone. Please excuse brevity and lack of formatting.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]           ` <50AEBF86.50501@citrix.com>
  2012-11-23  1:34             ` H. Peter Anvin
@ 2012-11-23  1:38             ` H. Peter Anvin
  2012-11-23  1:56               ` Andrew Cooper
       [not found]               ` <50AED7B8.7040902@citrix.com>
  1 sibling, 2 replies; 23+ messages in thread
From: H. Peter Anvin @ 2012-11-23  1:38 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

I still don't really get why it can't be isolated from dom0, which would make more sense to me, even for a Xen crash.

Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>On 22/11/2012 17:47, H. Peter Anvin wrote:
>> The other thing that should be considered here is how utterly
>> preposterous the notion of doing in-guest crash dumping is in a system
>> that contains a hypervisor.  The reason for kdump is that on bare metal
>> there are no other options, but in a hypervisor system the right thing
>> should be for the hypervisor to do the dump (possibly spawning a clean
>> I/O domain if the I/O domain is necessary to access the media.)
>>
>> There is absolutely no reason to have a crashkernel sitting around in
>> each guest, consuming memory, and possibly get corrupt.
>>
>> 	-hpa
>>
>
>(Your reply to my email which I can see on the xen devel archive appears
>to have gotten lost somewhere inside the citrix email system, so
>apologies for replying out of order)
>
>The kdump kernel loaded by dom0 is for when Xen crashes, not for when
>dom0 crashes (although a dom0 crash does admittedly lead to a Xen crash)
>
>There is no possible way it could be a separate domain; Xen completely
>ceases to function as soon as jumps to the entry point of the kdump
>image.
>
>~Andrew

-- 
Sent from my mobile phone. Please excuse brevity and lack of formatting.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-23  1:38             ` H. Peter Anvin
@ 2012-11-23  1:56               ` Andrew Cooper
       [not found]               ` <50AED7B8.7040902@citrix.com>
  1 sibling, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2012-11-23  1:56 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

On 23/11/2012 01:38, H. Peter Anvin wrote:
> I still don't really get why it can't be isolated from dom0, which would make more sense to me, even for a Xen crash.
>

The crash region (as specified by crashkernel= on the Xen command line)
is isolated from dom0.

dom0 (using the kexec utility etc) has the task of locating the Xen
crash notes (using the kexec hypercall interface), constructing a binary
blob containing kernel, initram and gubbins, and asking Xen to put this
blob in the crash region (again, using the kexec hypercall interface).

I do not see how this is very much different from the native case
currently (although please correct me if I am misinformed).  Linux has
extra work to do by populating /proc/iomem with the Xen crash regions at
boot (so the kexec utility can reference their physical addresses when
constructing the blob), and should just act as a conduit between the
kexec system call and the kexec hypercall to load the blob.

For within-guest kexec/kdump functionality, I agree that it is barking
mad.  However, we do see cloud operators interested in the idea so VM
administrators can look after their crashes themselves.

~Andrew
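The /proc/iomem plumbing Andrew describes can be illustrated with the kind of lookup kexec-tools performs: scan the file for the reserved crash region and take its physical range. The sketch below parses sample lines rather than the live file, since the Xen-specific entries exist only on a Xen dom0, and the addresses shown are made up.

```c
#include <string.h>

/* Sample /proc/iomem contents (illustrative addresses only). */
static const char *sample_iomem[] = {
	"00100000-7fffffff : System RAM",
	"  01000000-01ffffff : Kernel code",
	"  30000000-37ffffff : Crash kernel",
};

/* Find the entry named `name` and copy its "start-end" physical range
 * into buf.  Returns 0 on success, -1 if absent or buf is too small. */
static int find_region(const char *name, char *buf, size_t len)
{
	size_t i, n;

	for (i = 0; i < sizeof(sample_iomem) / sizeof(sample_iomem[0]); i++) {
		const char *sep = strstr(sample_iomem[i], " : ");
		const char *start = sample_iomem[i];

		if (!sep || strcmp(sep + 3, name) != 0)
			continue;
		while (*start == ' ')	/* skip nesting indentation */
			start++;
		n = (size_t)(sep - start);
		if (n >= len)
			return -1;
		memcpy(buf, start, n);
		buf[n] = '\0';
		return 0;
	}
	return -1;
}
```

On native Linux the same scan finds the region reserved by the kernel's own crashkernel= option; in the scheme discussed here, dom0 would publish the ranges it learned from Xen via the kexec hypercall interface.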

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]               ` <50AED7B8.7040902@citrix.com>
@ 2012-11-23  8:01                 ` Bouchard Louis
  2012-11-23  9:53                 ` Jan Beulich
       [not found]                 ` <50AF55B102000078000AABF3@nat28.tlf.novell.com>
  2 siblings, 0 replies; 23+ messages in thread
From: Bouchard Louis @ 2012-11-23  8:01 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, H. Peter Anvin,
	tglx@linutronix.de

Hi,

On 23/11/2012 02:56, Andrew Cooper wrote:
> For within-guest kexec/kdump functionality, I agree that it is barking
> mad.  However, we do see cloud operators interested in the idea so VM
> administrators can look after their crashes themselves.

It's not "barking mad" when your dayjob is to investigate and fix other
people's kernel problems.  Right now, it's impossible to get a kernel
image of a failing EC2 instance, so every time someone shows up with a
"my kernel crashes in my instance", we're left with mostly unusable
backtraces and oops messages.

When I'm able to reproduce someone's kernel panic, I'm quite happy to be
able to use virtualization to run a kernel dump analysis on a locally
reproduced context.

It's also quite useful when packaging things like makedumpfile,
kdump-tools to be able to avoid having to rely on bare metal to test new
releases. So yes, in theory it may look barking mad, but real life is
somewhat different.

Kind regards,

...Louis
-- 
Louis Bouchard
Backline Support Analyst
Canonical Ltd
Ubuntu support: http://landscape.canonical.com

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-22 12:15       ` Eric W. Biederman
                           ` (2 preceding siblings ...)
       [not found]         ` <50AE6542.3020302@zytor.com>
@ 2012-11-23  9:47         ` Daniel Kiper
       [not found]         ` <20121123094516.GA2921@host-192-168-1-59.local.net-space.pl>
  4 siblings, 0 replies; 23+ messages in thread
From: Daniel Kiper @ 2012-11-23  9:47 UTC (permalink / raw)
  To: ebiederm
  Cc: xen-devel, konrad.wilk, andrew.cooper3, x86, kexec, linux-kernel,
	virtualization, mingo, jbeulich, hpa, tglx

On Thu, Nov 22, 2012 at 04:15:48AM -0800, ebiederm@xmission.com wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
>
> > On Tue, Nov 20, 2012 at 08:40:39AM -0800, ebiederm@xmission.com wrote:
> >> Daniel Kiper <daniel.kiper@oracle.com> writes:
> >>
> >> > Some kexec/kdump implementations (e.g. Xen PVOPS) could not use default
> >> > functions or require some changes in behavior of kexec/kdump generic code.
> >> > To cope with that problem kexec_ops struct was introduced. It allows
> >> > a developer to replace all or some functions and control some
> >> > functionality of kexec/kdump generic code.
> >> >
> >> > Default behavior of kexec/kdump generic code is not changed.
> >>
> >> Ick.
> >>
> >> > v2 - suggestions/fixes:
> >> >    - add comment for kexec_ops.crash_alloc_temp_store member
> >> >      (suggested by Konrad Rzeszutek Wilk),
> >> >    - simplify kexec_ops usage
> >> >      (suggested by Konrad Rzeszutek Wilk).
> >> >
> >> > Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> >> > ---
> >> >  include/linux/kexec.h |   26 ++++++++++
> >> >  kernel/kexec.c        |  131 +++++++++++++++++++++++++++++++++++++------------
> >> >  2 files changed, 125 insertions(+), 32 deletions(-)
> >> >
> >> > diff --git a/include/linux/kexec.h b/include/linux/kexec.h
> >> > index d0b8458..c8d0b35 100644
> >> > --- a/include/linux/kexec.h
> >> > +++ b/include/linux/kexec.h
> >> > @@ -116,7 +116,33 @@ struct kimage {
> >> >  #endif
> >> >  };
> >> >
> >> > +struct kexec_ops {
> >> > +	/*
> >> > +	 * Some kdump implementations (e.g. Xen PVOPS dom0) could not access
> >> > +	 * directly crash kernel memory area. In this situation they must
> >> > +	 * allocate memory outside of it and later move contents from temporary
> >> > +	 * storage to final resting places (usually done by relocate_kernel()).
> >> > +	 * Such behavior could be enforced by setting
> >> > +	 * crash_alloc_temp_store member to true.
> >> > +	 */
> >>
> >> Why in the world would Xen not be able to access crash kernel memory?
> >> As currently defined it is normal memory that the kernel chooses not to
> >> use.
> >>
> >> If relocate kernel can access that memory you definitely can access the
> >> memory so the comment does not make any sense.
> >
> > Crash kernel memory is reserved by Xen hypervisor and Xen hypervisor
> > only has access to it. dom0 does not have any mapping of this area.
> > However, relocate_kernel() has access to crash kernel memory
> > because it is executed by Xen hypervisor and whole machine
> > memory is identity mapped.
>
> This is all weird.  Doubly so since this code is multi-arch and you have
> a set of requirements no other arch has had.
>
> I recall that Xen uses kexec in a unique manner.  What is the hypervisor
> interface and how is it used?
>
> Is this for when the hypervisor crashes and we want a crash dump of
> that?

At boot, dom0 gets some info about the kexec/kdump configuration from the Xen
hypervisor (e.g. the placement of the crash kernel area). Later, if you call
the kexec syscall, most things are done in the same way as on bare metal.
However, after placing the image in memory, the HYPERVISOR_kexec_op()
hypercall must be called to inform the hypervisor that the image is loaded
(the new machine_kexec_load hook is used for this; machine_kexec_unload is
used for unload). Then Xen establishes fixmap entries for the pages found in
page_list[] and returns control to dom0. If dom0 crashes, or "kexec execute"
is used by the user, then dom0 calls HYPERVISOR_kexec_op() to instruct the
hypervisor that the kexec/kdump image should be executed immediately. Xen
calls relocate_kernel() and everything runs as usual.

> >> > +	bool crash_alloc_temp_store;
> >> > +	struct page *(*kimage_alloc_pages)(gfp_t gfp_mask,
> >> > +						unsigned int order,
> >> > +						unsigned long limit);
> >> > +	void (*kimage_free_pages)(struct page *page);
> >> > +	unsigned long (*page_to_pfn)(struct page *page);
> >> > +	struct page *(*pfn_to_page)(unsigned long pfn);
> >> > +	unsigned long (*virt_to_phys)(volatile void *address);
> >> > +	void *(*phys_to_virt)(unsigned long address);
> >> > +	int (*machine_kexec_prepare)(struct kimage *image);
> >> > +	int (*machine_kexec_load)(struct kimage *image);
> >> > +	void (*machine_kexec_cleanup)(struct kimage *image);
> >> > +	void (*machine_kexec_unload)(struct kimage *image);
> >> > +	void (*machine_kexec_shutdown)(void);
> >> > +	void (*machine_kexec)(struct kimage *image);
> >> > +};
> >>
> >> Ugh.  This is a nasty abstraction.
> >>
> >> You are mixing and matching a bunch of things together here.
> >>
> >> If you need to override machine_kexec_xxx please do that on a per
> >> architecture basis.
> >
> > Yes, it is possible, but I think that it is worth doing at that
> > level because it could be useful for other archs too (e.g. the Xen ARM port
> > is under development). Then we do not need to duplicate that functionality
> > in arch code. Additionally, Xen requires the machine_kexec_load and
> > machine_kexec_unload hooks, which are not available in the current generic
> > kexec/kdump code.
>
>
> Let me be clear.  kexec_ops as you have implemented it is absolutely
> unacceptable.
>
> Your kexec_ops is not an abstraction but a hack that enshrines in stone
> implementation details.

Roger.

> >> Special case overrides of page_to_pfn, pfn_to_page, virt_to_phys,
> >> phys_to_virt, and friends seem completely inappropriate.
> >
> > They are required in the Xen PVOPS case. If we do not do it that way
> > then we would need to duplicate almost all of the existing generic
> > kexec/kdump code in arch-dependent files, not to mention capturing the
> > relevant syscall and other things. I think that this is the wrong way.
>
> A different definition of phys_to_virt and page_to_pfn for one specific
> function is total nonsense.
>
> It may actually be better to have a completely different code path.
> This looks more like code abuse than code reuse.
>
> Successful code reuse depends upon not breaking the assumptions on which
> the code relies, or modifying the code so that the new modified
> assumptions are clear.  In this case you might as well define up as down
> for all of the sense kexec_ops makes.

Hmmm... Well, the problem with the above-mentioned functions is that they work
on physical addresses. In Xen PVOPS (currently dom0 is PVOPS) they
are useless in the kexec/kdump case. It means that physical addresses
must be converted to/from machine addresses, which have real meaning
in the Xen PVOPS case. That is why those functions were introduced.

> >> There may be a point to all of these but you are mixing and matching
> >> things badly.
> >
> > Do you wish to split this kexec_ops struct into something which
> > works with addresses and something which is responsible for
> > loading, unloading and executing kexec/kdump? I am able to change
> > that but I would like to know a bit about your vision first.
>
> My vision is that we should have code that makes sense.
>
> My suspicion is that what you want is a cousin of the existing kexec
> system call.  Perhaps what is needed is a flag to say use the firmware
> kexec system call.
>
> I absolutely do not understand what Xen is trying to do.  kexec by
> design should not require any firmware specific hooks.  kexec at this
> level should only need to care about the processor architecture.  Clearly
> what you are doing with Xen requires special hooks separate even from
> the normal paravirt hooks.  So I do not understand what you are trying to do.
>
> It needs to be clear from the code what is happening differently in the
> Xen case.  Otherwise the code is unmaintainable as no one will be able
> to understand it.

I agree. I could remove all machine_* hooks from kexec_ops and call Xen
specific functions from arch files. However, I need to add two new
machine calls, machine_kexec_load and machine_kexec_unload, in the same
manner as existing machine_* calls. In general they could be used to inform
firmware (in this case Xen) that kexec/kdump image is loaded.

kimage_alloc_pages, kimage_free_pages, page_to_pfn, pfn_to_page, virt_to_phys
and phys_to_virt are worse. If we cannot find a good solution for replacing
them then we will end up calling a Xen-specific version of kexec/kdump which
would contain a nearly full copy of the existing kexec/kdump code. Not good.

We could add some code to kernel/kexec.c which depends on CONFIG_XEN.
It could contain the above-mentioned functions, which would later be called
by the existing kexec code. This is not nice, to be honest. However, I hope
that we can find a better solution for that problem.

Daniel


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]               ` <50AED7B8.7040902@citrix.com>
  2012-11-23  8:01                 ` Bouchard Louis
@ 2012-11-23  9:53                 ` Jan Beulich
       [not found]                 ` <50AF55B102000078000AABF3@nat28.tlf.novell.com>
  2 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2012-11-23  9:53 UTC (permalink / raw)
  To: Andrew Cooper, H. Peter Anvin
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Daniel Kiper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, tglx@linutronix.de

>>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 23/11/2012 01:38, H. Peter Anvin wrote:
>> I still don't really get why it can't be isolated from dom0, which would 
> make more sense to me, even for a Xen crash.
>>
> 
> The crash region (as specified by crashkernel= on the Xen command line)
> is isolated from dom0.
> 
> dom0 (using the kexec utility etc) has the task of locating the Xen
> crash notes (using the kexec hypercall interface), constructing a binary
> blob containing kernel, initram and gubbins, and asking Xen to put this
> blob in the crash region (again, using the kexec hypercall interface).
> 
> I do not see how this is very much different from the native case
> currently (although please correct me if I am misinformed).  Linux has
> extra work to do by populating /proc/iomem with the Xen crash regions at
> boot (so the kexec utility can reference their physical addresses when
> constructing the blob), and should just act as a conduit between the
> kexec system call and the kexec hypercall to load the blob.

But all of this _could_ be done completely independent of the
Dom0 kernel's kexec infrastructure (i.e. fully from user space,
invoking the necessary hypercalls through the privcmd driver).
It's just that parts of the kexec infrastructure can be re-used
(and hence that mechanism probably seemed the easier approach
to the implementer of the original kexec-on-Xen). If the kernel
folks dislike that re-use (quite understandably looking at how
much of it needs to be re-done), that shouldn't prevent us from
looking into the existing alternatives.

Jan


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-22 17:37         ` H. Peter Anvin
@ 2012-11-23  9:56           ` Jan Beulich
  2012-11-23 10:53             ` [Xen-devel] " Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2012-11-23  9:56 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: xen-devel, konrad.wilk, andrew.cooper3, Daniel Kiper, x86, kexec,
	linux-kernel, virtualization, mingo, Eric W. Biederman, tglx

>>> On 22.11.12 at 18:37, "H. Peter Anvin" <hpa@zytor.com> wrote:
> I actually talked to Ian Jackson at LCE, and mentioned among other 
> things the bogosity of requiring a PUD page for three-level paging in 
> Linux -- a bogosity which has spread from Xen into native.  It's a page 
> wasted for no good reason, since it only contains 32 bytes worth of 
> data, *inherently*.  Furthermore, contrary to popular belief, it is 
> *not* a page table per se.
> 
> Ian told me: "I didn't know we did that, and we shouldn't have to." 
> Here we have suffered this overhead for at least six years, ...

Even the Xen kernel only needs the full page when running on a
64-bit hypervisor (now that we don't have a 32-bit hypervisor
anymore, that of course basically means always). But yes, I too
never liked this enforced over-allocation for native kernels (and
was surprised that it was allowed in at all).

Jan


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]                 ` <50AF55B102000078000AABF3@nat28.tlf.novell.com>
@ 2012-11-23 10:37                   ` Daniel Kiper
       [not found]                   ` <20121123103726.GB2921@host-192-168-1-59.local.net-space.pl>
  1 sibling, 0 replies; 23+ messages in thread
From: Daniel Kiper @ 2012-11-23 10:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, H. Peter Anvin, tglx@linutronix.de

On Fri, Nov 23, 2012 at 09:53:37AM +0000, Jan Beulich wrote:
> >>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > On 23/11/2012 01:38, H. Peter Anvin wrote:
> >> I still don't really get why it can't be isolated from dom0, which would
> > make more sense to me, even for a Xen crash.
> >>
> >
> > The crash region (as specified by crashkernel= on the Xen command line)
> > is isolated from dom0.
> >
> > dom0 (using the kexec utility etc) has the task of locating the Xen
> > crash notes (using the kexec hypercall interface), constructing a binary
> > blob containing kernel, initram and gubbins, and asking Xen to put this
> > blob in the crash region (again, using the kexec hypercall interface).
> >
> > I do not see how this is very much different from the native case
> > currently (although please correct me if I am misinformed).  Linux has
> > extra work to do by populating /proc/iomem with the Xen crash regions at
> > boot (so the kexec utility can reference their physical addresses when
> > constructing the blob), and should just act as a conduit between the
> > kexec system call and the kexec hypercall to load the blob.
>
> But all of this _could_ be done completely independent of the
> Dom0 kernel's kexec infrastructure (i.e. fully from user space,
> invoking the necessary hypercalls through the privcmd driver).

No, this is impossible. The kexec/kdump image lives in dom0 kernel memory
until execution. That is why the privcmd driver by itself is not a solution
in this case.

> It's just that parts of the kexec infrastructure can be re-used
> (and hence that mechanism probably seemed the easier approach
> to the implementer of the original kexec-on-Xen). If the kernel
> folks dislike that re-use (quite understandably looking at how
> much of it needs to be re-done), that shouldn't prevent us from
> looking into the existing alternatives.

This is last resort option. First I think we should try to find
good solution which reuses existing code as much as possible.

Daniel


* Re: [Xen-devel] [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]                   ` <20121123103726.GB2921@host-192-168-1-59.local.net-space.pl>
@ 2012-11-23 10:51                     ` Ian Campbell
  2012-11-23 10:51                     ` Jan Beulich
       [not found]                     ` <1353667868.13542.218.camel@zakaz.uk.xensource.com>
  2 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2012-11-23 10:51 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, Jan Beulich, H. Peter Anvin,
	tglx@linutronix.de

On Fri, 2012-11-23 at 10:37 +0000, Daniel Kiper wrote:
> On Fri, Nov 23, 2012 at 09:53:37AM +0000, Jan Beulich wrote:
> > >>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > > The crash region (as specified by crashkernel= on the Xen command line)
> > > is isolated from dom0.
> > >[...]
> >
> > But all of this _could_ be done completely independent of the
> > Dom0 kernel's kexec infrastructure (i.e. fully from user space,
> > invoking the necessary hypercalls through the privcmd driver).
> 
> No, this is impossible. kexec/kdump image lives in dom0 kernel memory
> until execution.

Are you sure? I could have sworn they lived in the hypervisor-owned
memory set aside by the crashkernel= parameter, as Andy suggested.

Ian.


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]                   ` <20121123103726.GB2921@host-192-168-1-59.local.net-space.pl>
  2012-11-23 10:51                     ` [Xen-devel] " Ian Campbell
@ 2012-11-23 10:51                     ` Jan Beulich
  2012-11-23 11:08                       ` Daniel Kiper
       [not found]                     ` <1353667868.13542.218.camel@zakaz.uk.xensource.com>
  2 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2012-11-23 10:51 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, H. Peter Anvin, tglx@linutronix.de

>>> On 23.11.12 at 11:37, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> On Fri, Nov 23, 2012 at 09:53:37AM +0000, Jan Beulich wrote:
>> >>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> > On 23/11/2012 01:38, H. Peter Anvin wrote:
>> >> I still don't really get why it can't be isolated from dom0, which would
>> > make more sense to me, even for a Xen crash.
>> >>
>> >
>> > The crash region (as specified by crashkernel= on the Xen command line)
>> > is isolated from dom0.
>> >
>> > dom0 (using the kexec utility etc) has the task of locating the Xen
>> > crash notes (using the kexec hypercall interface), constructing a binary
>> > blob containing kernel, initram and gubbins, and asking Xen to put this
>> > blob in the crash region (again, using the kexec hypercall interface).
>> >
>> > I do not see how this is very much different from the native case
>> > currently (although please correct me if I am misinformed).  Linux has
>> > extra work to do by populating /proc/iomem with the Xen crash regions at
>> > boot (so the kexec utility can reference their physical addresses when
>> > constructing the blob), and should just act as a conduit between the
>> > kexec system call and the kexec hypercall to load the blob.
>>
>> But all of this _could_ be done completely independent of the
>> Dom0 kernel's kexec infrastructure (i.e. fully from user space,
>> invoking the necessary hypercalls through the privcmd driver).
> 
> No, this is impossible. kexec/kdump image lives in dom0 kernel memory
> until execution. That is why privcmd driver itself is not a solution
> in this case.

Even if so, there's no fundamental reason why that kernel image
can't be put into Xen-controlled space instead.

Jan


* Re: [Xen-devel] [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-23  9:56           ` Jan Beulich
@ 2012-11-23 10:53             ` Ian Campbell
  0 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2012-11-23 10:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, Daniel Kiper, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, H. Peter Anvin, tglx@linutronix.de

On Fri, 2012-11-23 at 09:56 +0000, Jan Beulich wrote:
> >>> On 22.11.12 at 18:37, "H. Peter Anvin" <hpa@zytor.com> wrote:
> > I actually talked to Ian Jackson at LCE, and mentioned among other 

That was me actually (this happens surprisingly often ;-)).

> > things the bogosity of requiring a PUD page for three-level paging in 
> > Linux -- a bogosity which has spread from Xen into native.  It's a page 
> > wasted for no good reason, since it only contains 32 bytes worth of 
> > data, *inherently*.  Furthermore, contrary to popular belief, it is 
> > *not* a page table per se.
> > 
> > Ian told me: "I didn't know we did that, and we shouldn't have to." 
> > Here we have suffered this overhead for at least six years, ...
> 
> Even the Xen kernel only needs the full page when running on a
> 64-bit hypervisor (now that we don't have a 32-bit hypervisor
> anymore, that of course basically means always).

I took an, admittedly very brief, look at it on the plane on the way
home, and it seems like the requirement for a complete page on the
pvops-xen side comes from the !SHARED_KERNEL_PMD stuff (so still a
Xen-related thing). This requires a struct page for the list_head it
contains (see pgd_list_add et al) rather than because of the use of the
page as a pgd as such.

>  But yes, I too
> never liked this enforced over-allocation for native kernels (and
> was surprised that it was allowed in at all).

Completely agreed.

I did wonder if just doing something like:
-	pgd = (pgd_t *)__get_free_page(PGALLOC_GFP);
+	if (SHARED_KERNEL_PMD)
+		pgd = some_appropriate_allocation_primitive(sizeof(*pgd));
+	else
+		pgd = (pgd_t *)__get_free_page(PGALLOC_GFP);

to pgd_alloc (+ the equivalent for the error path & free case, create
helper funcs as desired etc) would be sufficient to remove the over
allocation for the native case but haven't had time to properly
investigate.

Alternatively push the allocation down into paravirt_pgd_alloc to
taste :-/

Ian.


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
  2012-11-23 10:51                     ` Jan Beulich
@ 2012-11-23 11:08                       ` Daniel Kiper
  0 siblings, 0 replies; 23+ messages in thread
From: Daniel Kiper @ 2012-11-23 11:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, H. Peter Anvin, tglx@linutronix.de

On Fri, Nov 23, 2012 at 10:51:55AM +0000, Jan Beulich wrote:
> >>> On 23.11.12 at 11:37, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > On Fri, Nov 23, 2012 at 09:53:37AM +0000, Jan Beulich wrote:
> >> >>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >> > On 23/11/2012 01:38, H. Peter Anvin wrote:
> >> >> I still don't really get why it can't be isolated from dom0, which would
> >> > make more sense to me, even for a Xen crash.
> >> >>
> >> >
> >> > The crash region (as specified by crashkernel= on the Xen command line)
> >> > is isolated from dom0.
> >> >
> >> > dom0 (using the kexec utility etc) has the task of locating the Xen
> >> > crash notes (using the kexec hypercall interface), constructing a binary
> >> > blob containing kernel, initram and gubbins, and asking Xen to put this
> >> > blob in the crash region (again, using the kexec hypercall interface).
> >> >
> >> > I do not see how this is very much different from the native case
> >> > currently (although please correct me if I am misinformed).  Linux has
> >> > extra work to do by populating /proc/iomem with the Xen crash regions at
> >> > boot (so the kexec utility can reference their physical addresses when
> >> > constructing the blob), and should just act as a conduit between the
> >> > kexec system call and the kexec hypercall to load the blob.
> >>
> >> But all of this _could_ be done completely independent of the
> >> Dom0 kernel's kexec infrastructure (i.e. fully from user space,
> >> invoking the necessary hypercalls through the privcmd driver).
> >
> > No, this is impossible. kexec/kdump image lives in dom0 kernel memory
> > until execution. That is why privcmd driver itself is not a solution
> > in this case.
>
> Even if so, there's no fundamental reason why that kernel image
> can't be put into Xen controlled space instead.

Yep, but we must change the Xen kexec interface and/or its behavior first.
If we take that option then we could also move almost all of the needed
things from the dom0 kernel to Xen. This way we could simplify the Linux
kernel kexec/kdump infrastructure needed to run on Xen.

Daniel


* Re: [Xen-devel] [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]                     ` <1353667868.13542.218.camel@zakaz.uk.xensource.com>
@ 2012-11-23 11:13                       ` Daniel Kiper
  0 siblings, 0 replies; 23+ messages in thread
From: Daniel Kiper @ 2012-11-23 11:13 UTC (permalink / raw)
  To: Ian Campbell
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, Jan Beulich, H. Peter Anvin,
	tglx@linutronix.de

On Fri, Nov 23, 2012 at 10:51:08AM +0000, Ian Campbell wrote:
> On Fri, 2012-11-23 at 10:37 +0000, Daniel Kiper wrote:
> > On Fri, Nov 23, 2012 at 09:53:37AM +0000, Jan Beulich wrote:
> > > >>> On 23.11.12 at 02:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > > > The crash region (as specified by crashkernel= on the Xen command line)
> > > > is isolated from dom0.
> > > >[...]
> > >
> > > But all of this _could_ be done completely independent of the
> > > Dom0 kernel's kexec infrastructure (i.e. fully from user space,
> > > invoking the necessary hypercalls through the privcmd driver).
> >
> > No, this is impossible. kexec/kdump image lives in dom0 kernel memory
> > until execution.
>
> Are you sure? I could have sworn they lived in the hypervisor owned
> memory set aside by the crashkernel= parameter as Andy suggested.

I am sure. It is moved to its final resting place when
relocate_kernel() is called by the hypervisor.

Daniel


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]         ` <20121123094516.GA2921@host-192-168-1-59.local.net-space.pl>
@ 2012-11-23 20:24           ` Eric W. Biederman
  0 siblings, 0 replies; 23+ messages in thread
From: Eric W. Biederman @ 2012-11-23 20:24 UTC (permalink / raw)
  To: Daniel Kiper
  Cc: xen-devel, konrad.wilk, andrew.cooper3, x86, kexec, linux-kernel,
	virtualization, mingo, jbeulich, hpa, tglx

Daniel Kiper <daniel.kiper@oracle.com> writes:

> On Thu, Nov 22, 2012 at 04:15:48AM -0800, ebiederm@xmission.com wrote:
>>
>> Is this for when the hypervisor crashes and we want a crash dump of
>> that?
>
> dom0 at boot gets some info about kexec/kdump configuration from Xen hypervisor
> (e.g. placement of crash kernel area). Later if you call kexec syscall most
> things are done in the same way as on baremetal. However, after placing image
> in memory, HYPERVISOR_kexec_op() hypercall must be called to inform hypervisor
> that image is loaded (new hook machine_kexec_load is used for this;
> machine_kexec_unload is used for unload). Then Xen establishes fixmap for pages
> found in page_list[] and returns control to dom0. If dom0 crashes or "kexec execute"
> is used by user then dom0 calls HYPERVISOR_kexec_op() to instruct hypervisor that
> kexec/kdump image should be executed immediately. Xen calls relocate_kernel()
> and all things runs as usual.


Close

>> Successful code reuse depends upon not breaking the assumptions on which
>> the code relies, or modifying the code so that the new modified
>> assumptions are clear.  In this case you might as well define up as down
>> for all of the sense kexec_ops makes.
>
> Hmmm... Well, the problem with the above-mentioned functions is that they work
> on physical addresses. In Xen PVOPS (currently dom0 is PVOPS) they
> are useless in the kexec/kdump case. It means that physical addresses
> must be converted to/from machine addresses, which have real meaning
> in the Xen PVOPS case. That is why those functions were introduced.

Agreed operating on addresses that are relevant to the operation at hand
makes sense.

>> >> There may be a point to all of these but you are mixing and matching
>> >> things badly.
>> >
>> > Do you wish to split this kexec_ops struct into something which
>> > works with addresses and something which is responsible for
>> > loading, unloading and executing kexec/kdump? I am able to change
>> > that but I would like to know a bit about your vision first.
>>
>> My vision is that we should have code that makes sense.
>>
>> My suspicion is that what you want is a cousin of the existing kexec
>> system call.  Perhaps what is needed is a flag to say use the firmware
>> kexec system call.
>>
>> I absolutely do not understand what Xen is trying to do.  kexec by
>> design should not require any firmware specific hooks.  kexec at this
>> level should only need to care about the processor architecture.  Clearly
>> what you are doing with Xen requires special hooks separate even from
>> the normal paravirt hooks.  So I do not understand what you are trying to do.
>>
>> It needs to be clear from the code what is happening differently in the
>> Xen case.  Otherwise the code is unmaintainable as no one will be able
>> to understand it.
>
> I agree. I could remove all machine_* hooks from kexec_ops and call Xen
> specific functions from arch files. However, I need to add two new
> machine calls, machine_kexec_load and machine_kexec_unload, in the same
> manner as existing machine_* calls. In general they could be used to inform
> firmware (in this case Xen) that kexec/kdump image is loaded.
>
> kimage_alloc_pages, kimage_free_pages, page_to_pfn, pfn_to_page, virt_to_phys
> and phys_to_virt are worse. If we cannot find a good solution for replacing
> them then we will end up calling a Xen-specific version of kexec/kdump which
> would contain a nearly full copy of the existing kexec/kdump code. Not good.
>
> We could add some code to kernel/kexec.c which depends on CONFIG_XEN.
> It could contain above mentioned functions which later will be called
> by existing kexec code. This is not nice to be honest. However, I hope
> that we could find better solution for that problem.

Since in the Xen case you are not performing a normal kexec or kdump,
if you are going to continue to use the kexec system call then another
flag (like the KEXEC_ON_CRASH flag) should be used.

The userspace flag should be something like KEXEC_HYPERVISOR.  From
there we can have a generic interface that feeds into whatever the Xen
infrastructure is.  And if any other hypervisors implement kexec-like
functionality it could feed into them if we so choose.

When the choice is clearly between a Linux-only kexec and a
hypervisor-level kexec, using different functions to understand the
target addresses makes sense.

And of course /sbin/kexec can easily take an additional flag to say load
the kexec image to the hypervisor.

Eric


* Re: [PATCH v2 01/11] kexec: introduce kexec_ops struct
       [not found]             ` <09b41677-c9e7-4cd4-84a0-a1cb483d551d@email.android.com>
@ 2014-03-31 10:50               ` Petr Tesarik
  0 siblings, 0 replies; 23+ messages in thread
From: Petr Tesarik @ 2014-03-31 10:50 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	Andrew Cooper, Daniel Kiper, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	Eric W. Biederman, jbeulich@suse.com, tglx@linutronix.de

On Thu, 22 Nov 2012 14:26:10 -0800
"H. Peter Anvin" <hpa@zytor.com> wrote:

> Bullshit.  This should be a separate domain.

Thanks for top-posting, hpa...

> Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> >On 22/11/12 17:47, H. Peter Anvin wrote:
> >> The other thing that should be considered here is how utterly
> >> preposterous the notion of doing in-guest crash dumping is in a
> >> system that contains a hypervisor.  The reason for kdump is that on
> >> bare metal there are no other options, but in a hypervisor system
> >> the right thing should be for the hypervisor to do the dump
> >> (possibly spawning a clean I/O domain if the I/O domain is necessary
> >> to access the media.)
> >>
> >> There is absolutely no reason to have a crashkernel sitting around
> >> in each guest, consuming memory, and possibly get corrupt.
> >>
> >> 	-hpa
> >>
> >
> >I agree that regular guests should not be using the kexec/kdump.
> >However, this patch series is required for allowing a pvops kernel to
> >be a crash kernel for Xen, which is very important from dom0/Xen's
> >point of view.

In fact, a normal kernel is used for dumping, so it can handle both
Dom0 crashes _and_ hypervisor crashes. If you wanted to address
hypervisor crashes, you'd have to allocate some space for that, too, so
you may view this "madness" as a way to conserve resources.

The memory area is reserved by the Xen hypervisor, and only the extents
are passed down to the Dom0 kernel. In other words, there is indeed no
physical mapping for this area.

Having said that, I see no reason why that physical mapping cannot be
created if it is needed.

Petr T


Thread overview: 23+ messages
     [not found] <1353423893-23125-1-git-send-email-daniel.kiper@oracle.com>
     [not found] ` <1353423893-23125-2-git-send-email-daniel.kiper@oracle.com>
     [not found]   ` <1353423893-23125-3-git-send-email-daniel.kiper@oracle.com>
2012-11-20 15:52     ` [PATCH v2 02/11] x86/kexec: Add extra pointers to transition page table PGD, PUD, PMD and PTE Jan Beulich
2012-11-20 16:40   ` [PATCH v2 01/11] kexec: introduce kexec_ops struct Eric W. Biederman
     [not found]     ` <20121121105221.GA2925@host-192-168-1-59.local.net-space.pl>
2012-11-22 12:15       ` Eric W. Biederman
2012-11-22 17:37         ` H. Peter Anvin
2012-11-23  9:56           ` Jan Beulich
2012-11-23 10:53             ` [Xen-devel] " Ian Campbell
2012-11-22 17:47         ` H. Peter Anvin
     [not found]         ` <50AE6542.3020302@zytor.com>
2012-11-22 18:07           ` Andrew Cooper
     [not found]           ` <50AE69EF.4060909@citrix.com>
2012-11-22 22:26             ` H. Peter Anvin
     [not found]             ` <09b41677-c9e7-4cd4-84a0-a1cb483d551d@email.android.com>
2014-03-31 10:50               ` Petr Tesarik
2012-11-23  0:12           ` Andrew Cooper
     [not found]           ` <50AEBF86.50501@citrix.com>
2012-11-23  1:34             ` H. Peter Anvin
2012-11-23  1:38             ` H. Peter Anvin
2012-11-23  1:56               ` Andrew Cooper
     [not found]               ` <50AED7B8.7040902@citrix.com>
2012-11-23  8:01                 ` Bouchard Louis
2012-11-23  9:53                 ` Jan Beulich
     [not found]                 ` <50AF55B102000078000AABF3@nat28.tlf.novell.com>
2012-11-23 10:37                   ` Daniel Kiper
     [not found]                   ` <20121123103726.GB2921@host-192-168-1-59.local.net-space.pl>
2012-11-23 10:51                     ` [Xen-devel] " Ian Campbell
2012-11-23 10:51                     ` Jan Beulich
2012-11-23 11:08                       ` Daniel Kiper
     [not found]                     ` <1353667868.13542.218.camel@zakaz.uk.xensource.com>
2012-11-23 11:13                       ` [Xen-devel] " Daniel Kiper
2012-11-23  9:47         ` Daniel Kiper
     [not found]         ` <20121123094516.GA2921@host-192-168-1-59.local.net-space.pl>
2012-11-23 20:24           ` Eric W. Biederman
