* [PATCH] xen: Improvements to domain_crash_sync()
@ 2018-02-05 11:16 Andrew Cooper
2018-02-05 13:44 ` Jan Beulich
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Andrew Cooper @ 2018-02-05 11:16 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich
The use of __LINE__ in a printk() is problematic for livepatching, as it
causes unnecessary binary differences.
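For example, with the current macro:

  printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);

any unrelated change which shifts a call site up or down alters the integer
that __LINE__ expands to, so functions which are otherwise untouched still
differ at the binary level.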
Furthermore, the diagnostic information printed at call sites is inconsistent
and occasionally unhelpful, e.g. when diagnosing logs from the field, which
might be from release builds, or without the exact source code to hand.
Take the opportunity to improve things. Shorten the name to
domain_crash_sync() and require the user to pass a print message in.
Internally, the calling function is identified, and the message is emitted as
a non-debug guest error.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
xen/arch/x86/traps.c | 5 +----
xen/common/domain.c | 2 +-
xen/common/wait.c | 16 ++++------------
xen/include/xen/sched.h | 11 ++++++-----
5 files changed, 13 insertions(+), 23 deletions(-)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e7818ca..5370ffa 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1669,7 +1669,7 @@ void vmx_vmentry_failure(void)
error == VMX_INSN_INVALID_HOST_STATE )
vmcs_dump_vcpu(curr);
- domain_crash_synchronous();
+ domain_crash_sync("\n"); /* Nothing further interesting to print. */
}
void vmx_do_resume(struct vcpu *v)
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1187fd9..3c11582 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2111,10 +2111,7 @@ void asm_domain_crash_synchronous(unsigned long addr)
if ( addr == 0 )
addr = this_cpu(last_extable_addr);
- printk("domain_crash_sync called from entry.S: fault at %p %pS\n",
- _p(addr), _p(addr));
-
- __domain_crash_synchronous();
+ domain_crash_sync("entry.S fault at %pS [%p]\n", _p(addr), _p(addr));
}
/*
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4567773..d3ba2c2 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -685,7 +685,7 @@ void __domain_crash(struct domain *d)
}
-void __domain_crash_synchronous(void)
+void __domain_crash_sync(void)
{
__domain_crash(current->domain);
diff --git a/xen/common/wait.c b/xen/common/wait.c
index a57bc10..153a59e 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -133,10 +133,7 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
wqv->wakeup_cpu = smp_processor_id();
cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
- {
- gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
- domain_crash_synchronous();
- }
+ domain_crash_sync("Unable to set vcpu affinity\n");
/* Hand-rolled setjmp(). */
asm volatile (
@@ -164,10 +161,7 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
: "memory" );
if ( unlikely(wqv->esp == 0) )
- {
- gdprintk(XENLOG_ERR, "Stack too large in %s\n", __func__);
- domain_crash_synchronous();
- }
+ domain_crash_sync("Stack too large\n");
cpu_info->guest_cpu_user_regs.entry_vector = entry_vector;
}
@@ -194,10 +188,8 @@ void check_wakeup_from_wait(void)
struct vcpu *curr = current;
cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
- {
- gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
- domain_crash_synchronous();
- }
+ domain_crash_sync("Unable to set vcpu affinity\n");
+
wait(); /* takes us back into the scheduler */
}
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 39f9386..1da93aa 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -627,11 +627,12 @@ void __domain_crash(struct domain *d);
* Mark current domain as crashed and synchronously deschedule from the local
* processor. This function never returns.
*/
-void noreturn __domain_crash_synchronous(void);
-#define domain_crash_synchronous() do { \
- printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__); \
- __domain_crash_synchronous(); \
-} while (0)
+void noreturn __domain_crash_sync(void);
+#define domain_crash_sync(fmt, args...) do { \
+ printk(XENLOG_G_ERR "domain_crash_sync called from %s: " fmt, \
+ __func__, ## args); \
+ __domain_crash_sync(); \
+ } while (0)
/*
* Called from assembly code, with an optional address to help indicate why
--
2.1.4
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 11:16 [PATCH] xen: Improvements to domain_crash_sync() Andrew Cooper
@ 2018-02-05 13:44 ` Jan Beulich
2018-02-05 15:34 ` Andrew Cooper
2018-02-05 18:01 ` Konrad Rzeszutek Wilk
2018-02-06 2:18 ` Tian, Kevin
2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2018-02-05 13:44 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Kevin Tian, Jun Nakajima, Xen-devel
>>> On 05.02.18 at 12:16, <andrew.cooper3@citrix.com> wrote:
> The use of __LINE__ in a printk() is problematic for livepatching, as it
> causes unnecessary binary differences.
>
> Furthermore, the diagnostic information printed at call sites is inconsistent
> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
> might be from release builds, or without the exact source code to hand.
>
> Take the opportunity to improve things. Shorten the name to
> domain_crash_sync() and require the user to pass a print message in.
First of all I'd like to re-iterate that a long time ago a plan was
formulated to entirely remove synchronous domain crashing. If I
leave aside the three uses in wait.c (which you say you want to
remove in its entirety anyway rather sooner than later), there
are two other call sites. Wouldn't it therefore be more productive
to actually get rid of those?
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -627,11 +627,12 @@ void __domain_crash(struct domain *d);
> * Mark current domain as crashed and synchronously deschedule from the local
> * processor. This function never returns.
> */
> -void noreturn __domain_crash_synchronous(void);
> -#define domain_crash_synchronous() do { \
> - printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__); \
> - __domain_crash_synchronous(); \
> -} while (0)
> +void noreturn __domain_crash_sync(void);
> +#define domain_crash_sync(fmt, args...) do { \
> + printk(XENLOG_G_ERR "domain_crash_sync called from %s: " fmt, \
> + __func__, ## args); \
> + __domain_crash_sync(); \
> + } while (0)
If we really want to keep the functionality, may I then suggest
that you at least avoid retaining name space violation here? E.g.
rename what currently is domain_crash_synchronous() to
domain_crash_sync() as you already do, but rename the backing
__domain_crash_synchronous() to domain_crash_synchronous().
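I.e. roughly (a sketch based on the hunk above, with the names swapped):

  void noreturn domain_crash_synchronous(void);
  #define domain_crash_sync(fmt, args...) do { \
      printk(XENLOG_G_ERR "domain_crash_sync called from %s: " fmt, \
             __func__, ## args); \
      domain_crash_synchronous(); \
  } while (0)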
Jan
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 13:44 ` Jan Beulich
@ 2018-02-05 15:34 ` Andrew Cooper
2018-02-05 16:17 ` Jan Beulich
0 siblings, 1 reply; 8+ messages in thread
From: Andrew Cooper @ 2018-02-05 15:34 UTC (permalink / raw)
To: Jan Beulich; +Cc: Kevin Tian, Jun Nakajima, Xen-devel
On 05/02/18 13:44, Jan Beulich wrote:
>>>> On 05.02.18 at 12:16, <andrew.cooper3@citrix.com> wrote:
>> The use of __LINE__ in a printk() is problematic for livepatching, as it
>> causes unnecessary binary differences.
>>
>> Furthermore, the diagnostic information printed at call sites is inconsistent
>> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
>> might be from release builds, or without the exact source code to hand.
>>
>> Take the opportunity to improve things. Shorten the name to
>> domain_crash_sync() and require the user to pass a print message in.
> First of all I'd like to re-iterate that a long time ago a plan was
> formulated to entirely remove synchronous domain crashing. If I
> leave aside the three uses in wait.c (which you say you want to
> remove in its entirety anyway rather sooner than later), there
> are two other call sites. Wouldn't it therefore be more productive
> to actually get rid of those?
The asm_domain_crash_synchronous() callsite is also heading for the
axe. I've already deleted it in my series pulling bounce frame handling
up into C.
The vmx_vmentry_failure() callsite looks like it can turn into
domain_crash() by allowing the function to return and re-enter the
softirq processing path.
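Something along these lines (an untested sketch; assumes the asm caller can
cope with the function returning):

  /* At the end of vmx_vmentry_failure(), rather than never returning: */
  domain_crash(curr->domain);
  /*
   * Return; on the way back towards guest context, softirq processing
   * notices the crashed domain and deschedules this vcpu instead of
   * attempting another vmentry.
   */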
Given that, I'd be happy to get rid of the domain_crash_sync()
infrastructure eventually, but given how far off the deletion patches
are, I'd still like to drop the __LINE__ reference in the short term.
~Andrew
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 15:34 ` Andrew Cooper
@ 2018-02-05 16:17 ` Jan Beulich
2018-02-05 16:24 ` Andrew Cooper
0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2018-02-05 16:17 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Kevin Tian, Jun Nakajima, Xen-devel
>>> On 05.02.18 at 16:34, <andrew.cooper3@citrix.com> wrote:
> On 05/02/18 13:44, Jan Beulich wrote:
>>>>> On 05.02.18 at 12:16, <andrew.cooper3@citrix.com> wrote:
>>> The use of __LINE__ in a printk() is problematic for livepatching, as it
>>> causes unnecessary binary differences.
>>>
>>> Furthermore, the diagnostic information printed at call sites is inconsistent
>>> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
>>> might be from release builds, or without the exact source code to hand.
>>>
>>> Take the opportunity to improve things. Shorten the name to
>>> domain_crash_sync() and require the user to pass a print message in.
>> First of all I'd like to re-iterate that a long time ago a plan was
>> formulated to entirely remove synchronous domain crashing. If I
>> leave aside the three uses in wait.c (which you say you want to
>> remove in its entirety anyway rather sooner than later), there
>> are two other call sites. Wouldn't it therefore be more productive
>> to actually get rid of those?
>
> The asm_domain_crash_synchronous() callsite is also heading for the
> axe. I've already deleted it in my series pulling bounce frame handling
> up into C.
>
> The vmx_vmentry_failure() callsite looks like it can turn into
> domain_crash() by allowing the function to return and re-enter the
> softirq processing path.
>
> Given that, I'd be happy to get rid of the domain_crash_sync()
> infrastructure eventually, but given how far off the deletion patches
> are, I'd still like to drop the __LINE__ reference in the short term.
I can live with that, but please make clear in the commit message
that this is intended to die (so that people won't use improvements
being done here as argument to add new users).
Jan
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 16:17 ` Jan Beulich
@ 2018-02-05 16:24 ` Andrew Cooper
2018-02-05 16:30 ` Jan Beulich
0 siblings, 1 reply; 8+ messages in thread
From: Andrew Cooper @ 2018-02-05 16:24 UTC (permalink / raw)
To: Jan Beulich; +Cc: Kevin Tian, Jun Nakajima, Xen-devel
On 05/02/18 16:17, Jan Beulich wrote:
>>>> On 05.02.18 at 16:34, <andrew.cooper3@citrix.com> wrote:
>> On 05/02/18 13:44, Jan Beulich wrote:
>>>>>> On 05.02.18 at 12:16, <andrew.cooper3@citrix.com> wrote:
>>>> The use of __LINE__ in a printk() is problematic for livepatching, as it
>>>> causes unnecessary binary differences.
>>>>
>>>> Furthermore, the diagnostic information printed at call sites is inconsistent
>>>> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
>>>> might be from release builds, or without the exact source code to hand.
>>>>
>>>> Take the opportunity to improve things. Shorten the name to
>>>> domain_crash_sync() and require the user to pass a print message in.
>>> First of all I'd like to re-iterate that a long time ago a plan was
>>> formulated to entirely remove synchronous domain crashing. If I
>>> leave aside the three uses in wait.c (which you say you want to
>>> remove in its entirety anyway rather sooner than later), there
>>> are two other call sites. Wouldn't it therefore be more productive
>>> to actually get rid of those?
>> The asm_domain_crash_synchronous() callsite is also heading for the
>> axe. I've already deleted it in my series pulling bounce frame handling
>> up into C.
>>
>> The vmx_vmentry_failure() callsite looks like it can turn into
>> domain_crash() by allowing the function to return and re-enter the
>> softirq processing path.
>>
>> Given that, I'd be happy to get rid of the domain_crash_sync()
>> infrastructure eventually, but given how far off the deletion patches
>> are, I'd still like to drop the __LINE__ reference in the short term.
> I can live with that, but please make clear in the commit message
> that this is intended to die (so that people won't use improvements
> being done here as argument to add new users).
Actually, on further consideration, it's probably best to drop
domain_crash_sync() entirely, and open-code the softirq loop in the few
cases of almost-removed code. That would completely prevent people from
introducing new uses.
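E.g. (a minimal sketch, mirroring what __domain_crash_sync() currently does
internally):

  domain_crash(current->domain);
  for ( ; ; )          /* Never return: deschedule via the softirq loop. */
      do_softirq();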
~Andrew
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 16:24 ` Andrew Cooper
@ 2018-02-05 16:30 ` Jan Beulich
0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2018-02-05 16:30 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Kevin Tian, Jun Nakajima, Xen-devel
>>> On 05.02.18 at 17:24, <andrew.cooper3@citrix.com> wrote:
> Actually, on further consideration, it's probably best to drop
> domain_crash_sync() entirely, and open-code the softirq loop in the few
> cases of almost-removed code. That would completely prevent people from
> introducing new uses.
The three uses in wait.c may be a little ugly this way, but beyond
that I think I'd be fine with such an approach.
Jan
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 11:16 [PATCH] xen: Improvements to domain_crash_sync() Andrew Cooper
2018-02-05 13:44 ` Jan Beulich
@ 2018-02-05 18:01 ` Konrad Rzeszutek Wilk
2018-02-06 2:18 ` Tian, Kevin
2 siblings, 0 replies; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-02-05 18:01 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima, Xen-devel
On Mon, Feb 05, 2018 at 11:16:55AM +0000, Andrew Cooper wrote:
> The use of __LINE__ in a printk() is problematic for livepatching, as it
> causes unnecessary binary differences.
>
> Furthermore, the diagnostic information printed at call sites is inconsistent
> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
> might be from release builds, or without the exact source code to hand.
>
> Take the opportunity to improve things. Shorten the name to
> domain_crash_sync() and require the user to pass a print message in.
>
> Internally, the calling function is identified, and the message is emitted as
> a non-debug guest error.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* Re: [PATCH] xen: Improvements to domain_crash_sync()
2018-02-05 11:16 [PATCH] xen: Improvements to domain_crash_sync() Andrew Cooper
2018-02-05 13:44 ` Jan Beulich
2018-02-05 18:01 ` Konrad Rzeszutek Wilk
@ 2018-02-06 2:18 ` Tian, Kevin
2 siblings, 0 replies; 8+ messages in thread
From: Tian, Kevin @ 2018-02-06 2:18 UTC (permalink / raw)
To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Monday, February 5, 2018 7:17 PM
>
> The use of __LINE__ in a printk() is problematic for livepatching, as it
> causes unnecessary binary differences.
>
> Furthermore, the diagnostic information printed at call sites is inconsistent
> and occasionally unhelpful, e.g. when diagnosing logs from the field, which
> might be from release builds, or without the exact source code to hand.
>
> Take the opportunity to improve things. Shorten the name to
> domain_crash_sync() and require the user to pass a print message in.
>
> Internally, the calling function is identified, and the message is emitted as
> a non-debug guest error.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>