From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel@lists.xenproject.org, konrad@kernel.org,
	mpohlack@amazon.de, ross.lagerwall@citrix.com,
	sasha.levin@citrix.com, jinsong.liu@alibaba-inc.com,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v3 07/23] xsplice: Implement support for applying/reverting/replacing patches. (v5)
Date: Tue, 16 Feb 2016 19:11:10 +0000	[thread overview]
Message-ID: <56C3744E.8000702@citrix.com> (raw)
In-Reply-To: <1455300361-13092-8-git-send-email-konrad.wilk@oracle.com>

On 12/02/16 18:05, Konrad Rzeszutek Wilk wrote:
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 9d43f7b..b5995b9 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -36,6 +36,7 @@
>  #include <xen/cpu.h>
>  #include <xen/wait.h>
>  #include <xen/guest_access.h>
> +#include <xen/xsplice.h>
>  #include <public/sysctl.h>
>  #include <public/hvm/hvm_vcpu.h>
>  #include <asm/regs.h>
> @@ -121,6 +122,7 @@ static void idle_loop(void)
>          (*pm_idle)();
>          do_tasklet();
>          do_softirq();
> +        do_xsplice(); /* Must be last. */

The name "do_xsplice()" is slightly misleading (although it is in equal
company here).  check_for_xsplice_work() would be more accurate.

> diff --git a/xen/arch/x86/xsplice.c b/xen/arch/x86/xsplice.c
> index 814dd52..ae35e91 100644
> --- a/xen/arch/x86/xsplice.c
> +++ b/xen/arch/x86/xsplice.c
> @@ -10,6 +10,25 @@
>                              __func__,__LINE__, x); return x; }
>  #endif
>  
> +#define PATCH_INSN_SIZE 5

Somewhere you should have a BUILD_BUG_ON() confirming that
PATCH_INSN_SIZE fits within the undo array.
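Something like the following (untested sketch) would catch a mismatch
at build time:

    BUILD_BUG_ON(PATCH_INSN_SIZE >
                 sizeof(((struct xsplice_patch_func *)0)->undo));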

Having said that, I think all of xsplice_patch_func should be
arch-specific rather than generic.

> +
> +void xsplice_apply_jmp(struct xsplice_patch_func *func)
> +{
> +    uint32_t val;
> +    uint8_t *old_ptr;
> +
> +    old_ptr = (uint8_t *)func->old_addr;
> +    memcpy(func->undo, old_ptr, PATCH_INSN_SIZE);

At least a newline here please.

> +    *old_ptr++ = 0xe9; /* Relative jump */
> +    val = func->new_addr - func->old_addr - PATCH_INSN_SIZE;

The E9 opcode takes a rel32 operand, which is signed.

I think you need to explicitly cast to intptr_t and use signed
arithmetic throughout this calculation to correctly calculate a
backwards jump.
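i.e. something along these lines (sketch only):

    int32_t val = (int32_t)((intptr_t)func->new_addr -
                            (intptr_t)func->old_addr - PATCH_INSN_SIZE);

    memcpy(old_ptr, &val, sizeof(val));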

I think there should also be some sanity checks that both old_addr and
new_addr are in the Xen 1G virtual region.

> +    memcpy(old_ptr, &val, sizeof val);
> +}
> +
> +void xsplice_revert_jmp(struct xsplice_patch_func *func)
> +{
> +    memcpy((void *)func->old_addr, func->undo, PATCH_INSN_SIZE);

_p() is common shorthand in Xen for a cast to (void *).
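i.e.

    memcpy(_p(func->old_addr), func->undo, PATCH_INSN_SIZE);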

> +}
> +
>  int xsplice_verify_elf(struct xsplice_elf *elf, uint8_t *data)
>  {
>  
> diff --git a/xen/common/xsplice.c b/xen/common/xsplice.c
> index fbd6129..b854c0a 100644
> --- a/xen/common/xsplice.c
> +++ b/xen/common/xsplice.c
> @@ -3,6 +3,7 @@
>   *
>   */
>  
> +#include <xen/cpu.h>
>  #include <xen/guest_access.h>
>  #include <xen/keyhandler.h>
>  #include <xen/lib.h>
> @@ -10,16 +11,25 @@
>  #include <xen/mm.h>
>  #include <xen/sched.h>
>  #include <xen/smp.h>
> +#include <xen/softirq.h>
>  #include <xen/spinlock.h>
> +#include <xen/wait.h>
>  #include <xen/xsplice_elf.h>
>  #include <xen/xsplice.h>
>  
>  #include <asm/event.h>
> +#include <asm/nmi.h>
>  #include <public/sysctl.h>
>  
> +/*
> + * Protects against payload_list operations and also allows only one
> + * caller in schedule_work.
> + */
>  static DEFINE_SPINLOCK(payload_lock);
>  static LIST_HEAD(payload_list);
>  
> +static LIST_HEAD(applied_list);
> +
>  static unsigned int payload_cnt;
>  static unsigned int payload_version = 1;
>  
> @@ -29,6 +39,9 @@ struct payload {
>      struct list_head list;               /* Linked to 'payload_list'. */
>      void *payload_address;               /* Virtual address mapped. */
>      size_t payload_pages;                /* Nr of the pages. */
> +    struct list_head applied_list;       /* Linked to 'applied_list'. */
> +    struct xsplice_patch_func *funcs;    /* The array of functions to patch. */
> +    unsigned int nfuncs;                 /* Nr of functions to patch. */
>  
>      char name[XEN_XSPLICE_NAME_SIZE + 1];/* Name of it. */
>  };
> @@ -36,6 +49,24 @@ struct payload {
>  static int load_payload_data(struct payload *payload, uint8_t *raw, ssize_t len);
>  static void free_payload_data(struct payload *payload);
>  
> +/* Defines an outstanding patching action. */
> +struct xsplice_work
> +{
> +    atomic_t semaphore;          /* Used for rendezvous. First to grab it will
> +                                    do the patching. */
> +    atomic_t irq_semaphore;      /* Used to signal all IRQs disabled. */
> +    uint32_t timeout;                    /* Timeout to do the operation. */
> +    struct payload *data;        /* The payload on which to act. */
> +    volatile bool_t do_work;     /* Signals work to do. */
> +    volatile bool_t ready;       /* Signals all CPUs synchronized. */
> +    uint32_t cmd;                /* Action request: XSPLICE_ACTION_* */
> +};
> +
> +/* There can be only one outstanding patching action. */
> +static struct xsplice_work xsplice_work;

This is a scalability issue: every CPU on the return-to-guest path
polls the bytes making up do_work.  See below for my suggestion on how
to fix this.

> +
> +static int schedule_work(struct payload *data, uint32_t cmd, uint32_t timeout);
> +
>  static const char *state2str(int32_t state)
>  {
>  #define STATE(x) [XSPLICE_STATE_##x] = #x
> @@ -61,14 +92,23 @@ static const char *state2str(int32_t state)
>  static void xsplice_printall(unsigned char key)
>  {
>      struct payload *data;
> +    unsigned int i;
>  
>      spin_lock(&payload_lock);
>  
>      list_for_each_entry ( data, &payload_list, list )
> -        printk(" name=%s state=%s(%d) %p using %zu pages.\n", data->name,
> +    {
> +        printk(" name=%s state=%s(%d) %p using %zu pages:\n", data->name,
>                 state2str(data->state), data->state, data->payload_address,
>                 data->payload_pages);
>  
> +        for ( i = 0; i < data->nfuncs; i++ )
> +        {
> +            struct xsplice_patch_func *f = &(data->funcs[i]);
> +            printk("    %s patch 0x%"PRIx64"(%u) with 0x%"PRIx64"(%u)\n",
> +                   f->name, f->old_addr, f->old_size, f->new_addr, f->new_size);
> +        }

For a large patch, this could be thousands of entries.  You need to
periodically process pending softirqs to avoid a watchdog timeout.
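e.g. something like this inside the loop (sketch; the batch size is
arbitrary):

    if ( i && !(i % 64) )
        process_pending_softirqs();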

>  static int load_payload_data(struct payload *payload, uint8_t *raw, ssize_t len)
>  {
>      struct xsplice_elf elf;
> @@ -605,7 +695,14 @@ static int load_payload_data(struct payload *payload, uint8_t *raw, ssize_t len)
>      if ( rc )
>          goto err_payload;
>  
> -    /* Free our temporary data structure. */
> +    rc = check_special_sections(payload, &elf);
> +    if ( rc )
> +        goto err_payload;
> +
> +    rc = find_special_sections(payload, &elf);
> +    if ( rc )
> +        goto err_payload;
> +
>      xsplice_elf_free(&elf);
>      return 0;
>  
> @@ -617,6 +714,253 @@ static int load_payload_data(struct payload *payload, uint8_t *raw, ssize_t len)
>      return rc;
>  }
>  
> +
> +/*
> + * The following functions get the CPUs into an appropriate state and
> + * apply (or revert) each of the module's functions.
> + */
> +
> +/*
> + * This function is executed having all other CPUs with no stack (we may
> + * have cpu_idle on it) and IRQs disabled. We guard against NMI by temporarily
> + * installing our NOP NMI handler.
> + */
> +static int apply_payload(struct payload *data)
> +{
> +    unsigned int i;
> +
> +    printk(XENLOG_DEBUG "%s: Applying %u functions.\n", data->name,
> +           data->nfuncs);
> +
> +    for ( i = 0; i < data->nfuncs; i++ )
> +        xsplice_apply_jmp(data->funcs + i);

In cases such as this, it is better to use data->funcs[i], as the
compiler can more easily perform pointer alias analysis.
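i.e.

    xsplice_apply_jmp(&data->funcs[i]);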

> +
> +    list_add_tail(&data->applied_list, &applied_list);
> +
> +    return 0;
> +}
> +
> +/*
> + * This function is executed having all other CPUs with no stack (we may
> + * have cpu_idle on it) and IRQs disabled.
> + */
> +static int revert_payload(struct payload *data)
> +{
> +    unsigned int i;
> +
> +    printk(XENLOG_DEBUG "%s: Reverting.\n", data->name);
> +
> +    for ( i = 0; i < data->nfuncs; i++ )
> +        xsplice_revert_jmp(data->funcs + i);
> +
> +    list_del(&data->applied_list);
> +
> +    return 0;
> +}
> +
> +/* Must be holding the payload_lock. */
> +static int schedule_work(struct payload *data, uint32_t cmd, uint32_t timeout)
> +{
> +    /* Fail if an operation is already scheduled. */
> +    if ( xsplice_work.do_work )
> +        return -EBUSY;
> +
> +    xsplice_work.cmd = cmd;
> +    xsplice_work.data = data;
> +    xsplice_work.timeout = timeout ? timeout : MILLISECS(30);

Can shorten to "timeout ?: MILLISECS(30);"

> +
> +    printk(XENLOG_DEBUG "%s: timeout is %"PRI_stime"ms\n", data->name,
> +           xsplice_work.timeout / MILLISECS(1));
> +
> +    atomic_set(&xsplice_work.semaphore, -1);
> +    atomic_set(&xsplice_work.irq_semaphore, -1);
> +
> +    xsplice_work.ready = 0;
> +    smp_wmb();
> +    xsplice_work.do_work = 1;
> +    smp_wmb();
> +
> +    return 0;
> +}
> +
> +/*
> + * Note that because of this NOP code the do_nmi is not safely patchable.
> + * Also if we do receive 'real' NMIs we have lost them.

The MCE path needs consideration as well.  Unlike the NMI path,
however, that one cannot be ignored.

In both cases, it might be best to see about raising a tasklet or
softirq to pick up some deferred work.
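Roughly (sketch only; the name is made up, and how the deferred work
then gets handled still needs thought):

    static DEFINE_PER_CPU(bool_t, xsplice_nmi_pending);

    static int mask_nmi_callback(const struct cpu_user_regs *regs, int cpu)
    {
        /* Record that a real NMI arrived so it can be dealt with later. */
        this_cpu(xsplice_nmi_pending) = 1;
        return 1;
    }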

> + */
> +static int mask_nmi_callback(const struct cpu_user_regs *regs, int cpu)
> +{
> +    return 1;
> +}
> +
> +static void reschedule_fn(void *unused)
> +{
> +    smp_mb(); /* Synchronize with setting do_work */
> +    raise_softirq(SCHEDULE_SOFTIRQ);

As you have to IPI each processor to raise a schedule softirq anyway,
you can set a per-cpu "xsplice enter rendezvous" variable.  This avoids
having every CPU's return-to-guest path poll a single shared byte.
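Roughly (sketch only; name made up):

    static DEFINE_PER_CPU(bool_t, xsplice_enter_rendezvous);

    static void reschedule_fn(void *unused)
    {
        this_cpu(xsplice_enter_rendezvous) = 1;
        smp_mb();
        raise_softirq(SCHEDULE_SOFTIRQ);
    }

do_xsplice() can then check (and clear) its own flag rather than
reading the shared do_work byte on every pass.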

> +}
> +
> +static int xsplice_do_wait(atomic_t *counter, s_time_t timeout,
> +                           unsigned int total_cpus, const char *s)
> +{
> +    int rc = 0;
> +
> +    while ( atomic_read(counter) != total_cpus && NOW() < timeout )
> +        cpu_relax();
> +
> +    /* Log & abort. */
> +    if ( atomic_read(counter) != total_cpus )
> +    {
> +        printk(XENLOG_DEBUG "%s: %s %u/%u\n", xsplice_work.data->name,
> +               s, atomic_read(counter), total_cpus);
> +        rc = -EBUSY;
> +        xsplice_work.data->rc = rc;
> +        xsplice_work.do_work = 0;
> +        smp_wmb();
> +        return rc;
> +    }
> +    return rc;
> +}
> +
> +static void xsplice_do_single(unsigned int total_cpus)
> +{
> +    nmi_callback_t saved_nmi_callback;
> +    struct payload *data, *tmp;
> +    s_time_t timeout;
> +    int rc;
> +
> +    data = xsplice_work.data;
> +    timeout = xsplice_work.timeout + NOW();
> +    if ( xsplice_do_wait(&xsplice_work.semaphore, timeout, total_cpus,
> +                         "Timed out on CPU semaphore") )
> +        return;
> +
> +    /* "Mask" NMIs. */
> +    saved_nmi_callback = set_nmi_callback(mask_nmi_callback);
> +
> +    /* All CPUs are waiting, now signal to disable IRQs. */
> +    xsplice_work.ready = 1;
> +    smp_wmb();
> +
> +    atomic_inc(&xsplice_work.irq_semaphore);
> +    if ( xsplice_do_wait(&xsplice_work.irq_semaphore, timeout, total_cpus,
> +                         "Timed out on IRQ semaphore.") )

This path "leaks" the NMI mask.  It would be better use "goto out;"
handling in the function to make it clearer that the error paths are
taken care of.

> +        return;
> +
> +    local_irq_disable();

local_irq_save().  Easier to restore on an error path.
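i.e. local_irq_save(flags); on entry, and local_irq_restore(flags); on
each exit path.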

> +    /* Now this function should be the only one on any stack.
> +     * No need to lock the payload list or applied list. */
> +    switch ( xsplice_work.cmd )
> +    {
> +    case XSPLICE_ACTION_APPLY:
> +        rc = apply_payload(data);
> +        if ( rc == 0 )
> +            data->state = XSPLICE_STATE_APPLIED;
> +        break;
> +    case XSPLICE_ACTION_REVERT:
> +        rc = revert_payload(data);
> +        if ( rc == 0 )
> +            data->state = XSPLICE_STATE_CHECKED;
> +        break;
> +    case XSPLICE_ACTION_REPLACE:
> +        list_for_each_entry_safe_reverse ( data, tmp, &applied_list, list )
> +        {
> +            data->rc = revert_payload(data);
> +            if ( data->rc == 0 )
> +                data->state = XSPLICE_STATE_CHECKED;
> +            else
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +        }
> +        if ( rc != -EINVAL )
> +        {
> +            rc = apply_payload(xsplice_work.data);
> +            if ( rc == 0 )
> +                xsplice_work.data->state = XSPLICE_STATE_APPLIED;
> +        }
> +        break;
> +    default:
> +        rc = -EINVAL;
> +        break;
> +    }

Once code modification is complete, you must execute a serialising
instruction, such as CPUID, on all CPUs to flush the pipeline.  (Refer
to Intel SDM Vol 3, Section 8.1.4 "Handling Self- and Cross-Modifying
Code".)
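e.g. have every CPU run something like this (sketch, using the existing
cpuid() helper; the wrapper name is made up) before it resumes normal
execution:

    static void serialise_pipeline(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID is architecturally serialising. */
        cpuid(0, &eax, &ebx, &ecx, &edx);
    }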

> +
> +    xsplice_work.data->rc = rc;
> +
> +    local_irq_enable();
> +    set_nmi_callback(saved_nmi_callback);
> +
> +    xsplice_work.do_work = 0;
> +    smp_wmb(); /* Synchronize with waiting CPUs. */
> +}
> +
> +/*
> + * The main function which manages the work of quiescing the system and
> + * patching code.
> + */
> +void do_xsplice(void)
> +{
> +    struct payload *p = xsplice_work.data;
> +    unsigned int cpu = smp_processor_id();
> +
> +    /* Fast path: no work to do. */
> +    if ( likely(!xsplice_work.do_work) )
> +        return;
> +    ASSERT(local_irq_is_enabled());
> +
> +    /* Set at -1, so will go up to num_online_cpus - 1 */
> +    if ( atomic_inc_and_test(&xsplice_work.semaphore) )
> +    {
> +        unsigned int total_cpus;
> +
> +        if ( !get_cpu_maps() )
> +        {
> +            printk(XENLOG_DEBUG "%s: CPU%u - unable to get cpu_maps lock.\n",
> +                   p->name, cpu);
> +            xsplice_work.data->rc = -EBUSY;
> +            xsplice_work.do_work = 0;
> +            return;

This error path leaves a ref in the semaphore.

> +        }
> +
> +        barrier(); /* MUST do it after get_cpu_maps. */
> +        total_cpus = num_online_cpus() - 1;
> +
> +        if ( total_cpus )
> +        {
> +            printk(XENLOG_DEBUG "%s: CPU%u - IPIing the %u CPUs.\n", p->name,
> +                   cpu, total_cpus);
> +            smp_call_function(reschedule_fn, NULL, 0);
> +        }
> +        (void)xsplice_do_single(total_cpus);
> +
> +        ASSERT(local_irq_is_enabled());
> +
> +        put_cpu_maps();
> +
> +        printk(XENLOG_DEBUG "%s finished with rc=%d\n", p->name, p->rc);
> +    }
> +    else
> +    {
> +        /* Wait for all CPUs to rendezvous. */
> +        while ( xsplice_work.do_work && !xsplice_work.ready )
> +        {
> +            cpu_relax();
> +            smp_rmb();
> +        }
> +

What happens here if the rendezvous initiator times out?  Looks like we
will spin forever waiting for do_work which will never drop back to 0.

> +        /* Disable IRQs and signal. */
> +        local_irq_disable();
> +        atomic_inc(&xsplice_work.irq_semaphore);
> +
> +        /* Wait for patching to complete. */
> +        while ( xsplice_work.do_work )
> +        {
> +            cpu_relax();
> +            smp_rmb();
> +        }
> +        local_irq_enable();

Splitting the modification of do_work and ready across multiple
functions makes it particularly hard to reason about the correctness of
the rendezvous.  It would be better to have an xsplice_rendezvous()
function whose sole purpose is to negotiate the rendezvous, using local
static state.  The action can then be just the switch() from
xsplice_do_single().
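Very rough shape (sketch only; names made up, and timeout handling plus
re-arming the state between invocations is omitted):

    static atomic_t rendezvous_cpus = ATOMIC_INIT(0);
    static volatile bool_t rendezvous_done;

    /* Every CPU calls this; nobody leaves until all CPUs have arrived. */
    static void xsplice_rendezvous_enter(unsigned int total_cpus)
    {
        atomic_inc(&rendezvous_cpus);

        while ( atomic_read(&rendezvous_cpus) != total_cpus )
            cpu_relax();
    }

    /* The initiator calls this after the switch(); everyone else waits. */
    static void xsplice_rendezvous_exit(bool_t initiator)
    {
        if ( initiator )
        {
            smp_wmb();
            rendezvous_done = 1;
        }
        else
        {
            while ( !rendezvous_done )
            {
                cpu_relax();
                smp_rmb();
            }
        }
    }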

> +    }
> +}
> +
> diff --git a/xen/include/asm-arm/nmi.h b/xen/include/asm-arm/nmi.h
> index a60587e..82aff35 100644
> --- a/xen/include/asm-arm/nmi.h
> +++ b/xen/include/asm-arm/nmi.h
> @@ -4,6 +4,19 @@
>  #define register_guest_nmi_callback(a)  (-ENOSYS)
>  #define unregister_guest_nmi_callback() (-ENOSYS)
>  
> +typedef int (*nmi_callback_t)(const struct cpu_user_regs *regs, int cpu);
> +
> +/**
> + * set_nmi_callback
> + *
> + * Set a handler for an NMI. Only one handler may be
> + * set. Return the old nmi callback handler.
> + */
> +static inline nmi_callback_t set_nmi_callback(nmi_callback_t callback)
> +{
> +    return NULL;
> +}
> +

This addition suggests that there should probably be an
arch_xsplice_prepare_rendezvous() and arch_xsplice_finish_rendezvous().
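Something like (sketch):

    void arch_xsplice_prepare_rendezvous(void); /* x86: mask NMIs, etc.      */
    void arch_xsplice_finish_rendezvous(void);  /* x86: restore NMI callback. */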

~Andrew
