From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Haozhong Zhang <haozhong.zhang@intel.com>,
	xen-devel@lists.xensource.com,
	Xiao Guangrong <guangrong.xiao@linux.intel.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Anthony.Perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [PATCH 2/2] pc-nvdimm acpi: build ACPI tables for pc-nvdimm devices
Date: Mon, 4 Jan 2016 16:10:55 -0500
Message-ID: <20160104211055.GA23242@char.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1601041558270.27577@kaball.uk.xensource.com>

On Mon, Jan 04, 2016 at 04:01:08PM +0000, Stefano Stabellini wrote:
> CC'ing the Xen tools maintainers and Anthony.
> 
> On Tue, 29 Dec 2015, Haozhong Zhang wrote:
> > Reuse existing NVDIMM ACPI code to build ACPI tables for pc-nvdimm
> > devices. The resulting tables are then copied into Xen guest domain so
> > that they can later be loaded by Xen hvmloader.
> > 
> > Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
> 
> How much work would it be to generate the nvdimm acpi tables from the
> Xen toolstack?

Why duplicate the code? QEMU already generates the NFIT table and its sub-tables.
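
For reference, the NFIT is a single ACPI table whose body is just a
sequence of variable-length sub-structures describing the NVDIMMs.
Roughly, following the ACPI 6.x definitions (the field names below are
illustrative, not QEMU's own identifiers):

    #include <stdint.h>

    /* Common ACPI table header plus the NFIT's 4 reserved bytes. */
    struct nfit_header {
        char     signature[4];          /* "NFIT" */
        uint32_t length;                /* header + all sub-structures */
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        char     creator_id[4];
        uint32_t creator_revision;
        uint32_t reserved;
    } __attribute__((packed));          /* 40 bytes */

    /* Sub-structure type 0: System Physical Address (SPA) Range,
     * one per contiguous guest-physical NVDIMM region. */
    struct nfit_spa_range {
        uint16_t type;                  /* 0 */
        uint16_t length;                /* 56 */
        uint16_t spa_index;
        uint16_t flags;
        uint32_t reserved;
        uint32_t proximity_domain;
        uint8_t  range_type_guid[16];   /* persistent-memory GUID */
        uint64_t base;                  /* guest-physical base */
        uint64_t size;
        uint64_t memory_attributes;
    } __attribute__((packed));

QEMU already knows how to lay these out (along with the region-mapping and
control-region sub-tables), so the toolstack would only be re-implementing it.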
> 
> Getting the tables from QEMU doesn't seem like a good idea to me, unless
> we start getting the whole set of ACPI tables that way.

There is also the ACPI DSDT code, which requires a memory region to be
reserved where the AML code can drop its parameters so that QEMU can scan
the NVDIMM for failures. That region (and its size) should be determined
by QEMU, since QEMU is the component that operates on this data.
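
The flow is roughly: the AML _DSM method copies its arguments into that
reserved guest memory, writes to an I/O port that QEMU traps on, and then
reads the result back from the same buffer. A hypothetical sketch of the
QEMU side (the port, structure layout and names here are made up for
illustration, not the actual QEMU implementation):

    #include "exec/hwaddr.h"
    #include "exec/cpu-common.h"

    #define DSM_BUF_SIZE 4096       /* assumed size of the reserved region */

    struct dsm_request {            /* filled in by the guest's AML code */
        uint32_t handle;            /* which NVDIMM */
        uint32_t revision;
        uint32_t function;          /* e.g. query bad-block/ARS status */
        uint8_t  args[DSM_BUF_SIZE - 12];
    } __attribute__((packed));

    /* I/O write handler: 'buf_gpa' is the guest-physical address of the
     * reserved parameter region the AML code wrote its request into. */
    static void dsm_io_write(void *opaque, hwaddr addr,
                             uint64_t buf_gpa, unsigned size)
    {
        struct dsm_request req;

        cpu_physical_memory_read(buf_gpa, &req, sizeof(req));
        /* ... dispatch on req.function, e.g. scan the backing store for
         * bad blocks, then write the reply back at buf_gpa ... */
    }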


> 
> 
> >  hw/acpi/nvdimm.c     |  5 +++-
> >  hw/i386/pc.c         |  6 ++++-
> >  include/hw/xen/xen.h |  2 ++
> >  xen-hvm.c            | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  4 files changed, 82 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > index df1b176..7c4b931 100644
> > --- a/hw/acpi/nvdimm.c
> > +++ b/hw/acpi/nvdimm.c
> > @@ -29,12 +29,15 @@
> >  #include "hw/acpi/acpi.h"
> >  #include "hw/acpi/aml-build.h"
> >  #include "hw/mem/nvdimm.h"
> > +#include "hw/mem/pc-nvdimm.h"
> > +#include "hw/xen/xen.h"
> >  
> >  static int nvdimm_plugged_device_list(Object *obj, void *opaque)
> >  {
> >      GSList **list = opaque;
> > +    const char *type_name = xen_enabled() ? TYPE_PC_NVDIMM : TYPE_NVDIMM;
> >  
> > -    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
> > +    if (object_dynamic_cast(obj, type_name)) {
> >          DeviceState *dev = DEVICE(obj);
> >  
> >          if (dev->realized) { /* only realized NVDIMMs matter */
> > diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> > index 459260b..fadacf5 100644
> > --- a/hw/i386/pc.c
> > +++ b/hw/i386/pc.c
> > @@ -1186,7 +1186,11 @@ void pc_guest_info_machine_done(Notifier *notifier, void *data)
> >          }
> >      }
> >  
> > -    acpi_setup(&guest_info_state->info);
> > +    if (!xen_enabled()) {
> > +        acpi_setup(&guest_info_state->info);
> > +    } else if (xen_hvm_acpi_setup(PC_MACHINE(qdev_get_machine()))) {
> > +        error_report("Warning: failed to initialize Xen HVM ACPI tables");
> > +    }
> >  }
> >  
> >  PcGuestInfo *pc_guest_info_init(PCMachineState *pcms)
> > diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> > index e90931a..8b705e1 100644
> > --- a/include/hw/xen/xen.h
> > +++ b/include/hw/xen/xen.h
> > @@ -51,4 +51,6 @@ void xen_register_framebuffer(struct MemoryRegion *mr);
> >  #  define HVM_MAX_VCPUS 32
> >  #endif
> >  
> > +int xen_hvm_acpi_setup(PCMachineState *pcms);
> > +
> >  #endif /* QEMU_HW_XEN_H */
> > diff --git a/xen-hvm.c b/xen-hvm.c
> > index 6ebf43f..f1f5e77 100644
> > --- a/xen-hvm.c
> > +++ b/xen-hvm.c
> > @@ -26,6 +26,13 @@
> >  #include <xen/hvm/params.h>
> >  #include <xen/hvm/e820.h>
> >  
> > +#include "qemu/error-report.h"
> > +#include "hw/acpi/acpi.h"
> > +#include "hw/acpi/aml-build.h"
> > +#include "hw/acpi/bios-linker-loader.h"
> > +#include "hw/mem/nvdimm.h"
> > +#include "hw/mem/pc-nvdimm.h"
> > +
> >  //#define DEBUG_XEN_HVM
> >  
> >  #ifdef DEBUG_XEN_HVM
> > @@ -1330,6 +1337,70 @@ int xen_hvm_init(PCMachineState *pcms,
> >      return 0;
> >  }
> >  
> > +int xen_hvm_acpi_setup(PCMachineState *pcms)
> > +{
> > +    AcpiBuildTables *hvm_acpi_tables;
> > +    GArray *tables_blob, *table_offsets;
> > +
> > +    ram_addr_t acpi_tables_addr, acpi_tables_size;
> > +    void *host;
> > +
> > +    struct xs_handle *xs = NULL;
> > +    char path[80], value[17];
> > +
> > +    if (!pcms->nvdimm) {
> > +        return 0;
> > +    }
> > +
> > +    hvm_acpi_tables = g_malloc0(sizeof(AcpiBuildTables));
> > +    if (!hvm_acpi_tables) {
> > +        return -1;
> > +    }
> > +    acpi_build_tables_init(hvm_acpi_tables);
> > +    tables_blob = hvm_acpi_tables->table_data;
> > +    table_offsets = g_array_new(false, true, sizeof(uint32_t));
> > +    bios_linker_loader_alloc(hvm_acpi_tables->linker,
> > +                             ACPI_BUILD_TABLE_FILE, 64, false);
> > +
> > +    /* build NFIT tables */
> > +    nvdimm_build_acpi(table_offsets, tables_blob, hvm_acpi_tables->linker);
> > +    g_array_free(table_offsets, true);
> > +
> > +    /* copy ACPI tables into VM */
> > +    acpi_tables_size = tables_blob->len;
> > +    acpi_tables_addr =
> > +        (pcms->below_4g_mem_size - acpi_tables_size) & XC_PAGE_MASK;
> > +    host = xc_map_foreign_range(xen_xc, xen_domid,
> > +                                ROUND_UP(acpi_tables_size, XC_PAGE_SIZE),
> > +                                PROT_READ | PROT_WRITE,
> > +                                acpi_tables_addr >> XC_PAGE_SHIFT);
> > +    memcpy(host, tables_blob->data, acpi_tables_size);
> > +    munmap(host, ROUND_UP(acpi_tables_size, XC_PAGE_SIZE));
> > +
> > +    /* write address and size of ACPI tables to xenstore */
> > +    xs = xs_open(0);
> > +    if (xs == NULL) {
> > +        error_report("could not contact XenStore\n");
> > +        return -1;
> > +    }
> > +    snprintf(path, sizeof(path),
> > +             "/local/domain/%d/hvmloader/dm-acpi/address", xen_domid);
> > +    snprintf(value, sizeof(value), "%"PRIu64, (uint64_t) acpi_tables_addr);
> > +    if (!xs_write(xs, 0, path, value, strlen(value))) {
> > +        error_report("failed to write NFIT base address to xenstore\n");
> > +        return -1;
> > +    }
> > +    snprintf(path, sizeof(path),
> > +             "/local/domain/%d/hvmloader/dm-acpi/length", xen_domid);
> > +    snprintf(value, sizeof(value), "%"PRIu64, (uint64_t) acpi_tables_size);
> > +    if (!xs_write(xs, 0, path, value, strlen(value))) {
> > +        error_report("failed to write NFIT size to xenstore\n");
> > +        return -1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> >  void destroy_hvm_domain(bool reboot)
> >  {
> >      XenXC xc_handle;
> > -- 
> > 2.4.8

Thread overview: 15+ messages
2015-12-29 11:28 [PATCH 0/2] add vNVDIMM support for Xen Haozhong Zhang
2015-12-29 11:28 ` [PATCH 1/2] pc-nvdimm: implement pc-nvdimm device abstract Haozhong Zhang
2015-12-29 11:28 ` [PATCH 2/2] pc-nvdimm acpi: build ACPI tables for pc-nvdimm devices Haozhong Zhang
2016-01-04 16:01   ` Stefano Stabellini
2016-01-04 21:10     ` Konrad Rzeszutek Wilk [this message]
2016-01-05 11:00       ` Stefano Stabellini
2016-01-05 14:01         ` Haozhong Zhang
2016-01-06 14:50           ` Konrad Rzeszutek Wilk
2016-01-06 15:24             ` Haozhong Zhang
2016-01-05  2:14     ` Haozhong Zhang
2015-12-29 15:11 ` [PATCH 0/2] add vNVDIMM support for Xen Xiao Guangrong
2016-01-04 15:57   ` Stefano Stabellini
2016-01-05  1:22     ` Haozhong Zhang
2016-01-04 15:56 ` Stefano Stabellini
2016-01-05  1:33   ` Haozhong Zhang
