From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Cc: ehabkost@redhat.com, kvm@vger.kernel.org, gleb@kernel.org,
	mtosatti@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com,
	imammedo@redhat.com, pbonzini@redhat.com,
	dan.j.williams@intel.com, rth@twiddle.net
Subject: Re: [Qemu-devel] [PATCH 8/9] nvdimm acpi: emulate dsm method
Date: Wed, 2 Mar 2016 08:36:10 +0200	[thread overview]
Message-ID: <20160302081841-mutt-send-email-mst@redhat.com> (raw)
In-Reply-To: <56D65E42.1050108@linux.intel.com>

On Wed, Mar 02, 2016 at 11:30:10AM +0800, Xiao Guangrong wrote:
> 
> 
> On 03/02/2016 01:09 AM, Michael S. Tsirkin wrote:
> 
> >
> >Can't guest trigger this?
> >If yes, don't put such code in production please:
> >this will fill up disk on the host.
> >
> 
> Okay, an evil guest can access the IO port freely. I will use nvdimm_debug() instead.
> 
> >
> >>
> >>  static void
> >>  nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> >>  {
> >>+    NvdimmDsmIn *in;
> >>+    GArray *out;
> >>+    uint32_t buf_size;
> >>+    hwaddr dsm_mem_addr = val;
> >>+
> >>+    nvdimm_debug("dsm memory address %#lx.\n", dsm_mem_addr);
> >>+
> >>+    /*
> >>+     * The DSM memory is mapped to guest address space so an evil guest
> >>+     * can change its content while we are doing DSM emulation. Avoid
> >>+     * this by copying DSM memory to QEMU local memory.
> >>+     */
> >>+    in = g_malloc(TARGET_PAGE_SIZE);

ugh. manual memory management :(
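One way to avoid the explicit g_malloc()/g_free() pairing would be
something like this (just a sketch, assuming the minimum GLib version
in use provides g_autofree):

/* freed automatically when the pointer goes out of scope */
g_autofree NvdimmDsmIn *in = g_malloc(TARGET_PAGE_SIZE);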

> >>+    cpu_physical_memory_read(dsm_mem_addr, in, TARGET_PAGE_SIZE);

is there a requirement that the address is aligned?
if not, this might cross a page boundary and crash qemu.
better to read just what you need.
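I.e. something along these lines (just a sketch; the header layout
below is illustrative only, not the actual NvdimmDsmIn definition):

/* illustrative header-only view of the request; field order is a guess */
struct {
    uint32_t revision;
    uint32_t function;
    uint32_t handle;
} QEMU_PACKED hdr;

cpu_physical_memory_read(dsm_mem_addr, &hdr, sizeof(hdr));
/* then validate/bound any guest-supplied payload length before reading more */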

> >>+
> >>+    le32_to_cpus(&in->revision);
> >>+    le32_to_cpus(&in->function);
> >>+    le32_to_cpus(&in->handle);
> >>+
> >>+    nvdimm_debug("Revision %#x Handler %#x Function %#x.\n", in->revision,
> >>+                 in->handle, in->function);
> >>+
> >>+    out = g_array_new(false, true /* clear */, 1);

export build_alloc_array then, and reuse?
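(IIRC that helper in hw/acpi/aml-build.c is just:

static GArray *build_alloc_array(void)
{
    return g_array_new(false, true /* clear */, 1);
}

so dropping the static and declaring it in the header would be enough.)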

> >>+
> >>+    /*
> >>+     * function 0 is called to inquire what functions are supported by
> >>+     * OSPM
> >>+     */
> >>+    if (in->function == 0) {
> >>+        build_append_int_noprefix(out, 0 /* No function Supported */,
> >>+                                  sizeof(uint8_t));

What does this mean? Same comment here and below ...


> >>+    } else {
> >>+        /* No function is supported yet. */
> >>+        build_append_int_noprefix(out, 1 /* Not Supported */,
> >>+                                  sizeof(uint8_t));
> >>+    }
> >>+
> >>+    buf_size = cpu_to_le32(out->len);
> >>+    cpu_physical_memory_write(dsm_mem_addr, &buf_size, sizeof(buf_size));
> >
> >is there a race here?
> >can guest read this before data is written?
> 
> I think no.
> 
> It is a SERIALIZED DSM, so there is no race in the guest. And the CPU has exited
> guest mode by the time we fill the buffer in the same CPU context, so the guest
> cannot read the buffer at that point; a memory barrier is not needed here either.
> 
> >
> >>+    cpu_physical_memory_write(dsm_mem_addr + sizeof(buf_size), out->data,
> >>+                              out->len);
> >
> >What is this doing?
> >Is this actually writing AML bytecode into guest memory?
> 
> The layout of the result written into the buffer is like this:
> struct NvdimmDsmOut {
>     /* the size of buffer filled by QEMU. */
>     uint32_t len;
>     uint8_t data[0];
> } QEMU_PACKED;
> typedef struct NvdimmDsmOut NvdimmDsmOut;
> 
> So the first cpu_physical_memory_write() writes the @len and the second one you
> pointed out writes the real payload.


So either write a function that takes parameters and formats the
buffer, or use a structure to do this. Do not open-code the
formatting and don't mess with offsets by hand.

E.g.

struct NvdimmDsmFunc0Out {
     /* the size of buffer filled by QEMU. */
     uint32_t len;
     uint8_t supported;
} QEMU_PACKED;
typedef struct NvdimmDsmFunc0Out NvdimmDsmFunc0Out;


And now

NvdimmDsmFunc0Out func0 = { .len = cpu_to_le32(sizeof(func0)), .supported = func == 0 };

cpu_physical_memory_write(dsm_mem_addr, &func0, sizeof func0);


Or if you really insist on using GArray:

build_dsm_out_func0(int function...)
{
    uint32_t len;
    uint8_t result;

    len = sizeof result;
    if (function == 0) {
        result = 0 /* No function Supported */;
    } else {
        /* No function is supported yet. */
        result = 1 /* Not Supported */;
    }

    build_append_int_noprefix(out, len, sizeof len);
    build_append_int_noprefix(out, result, sizeof result);

    assert(out->len < PAGE_SIZE); /* is this right? */
    cpu_physical_memory_write(dsm_mem_addr, out->data,
                              out->len);
}


but I prefer the former ...

-- 
MST

Thread overview: 25+ messages
2016-03-01 10:56 [Qemu-devel] [PATCH v4 0/9] NVDIMM ACPI: introduce the framework of QEMU emulated Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 1/9] acpi: add aml_create_field() Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 2/9] acpi: add aml_concatenate() Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 3/9] acpi: allow using object as offset for OperationRegion Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 4/9] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 5/9] acpi: add build_append_named_dword, returning an offset in buffer Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 6/9] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 7/9] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
2016-03-01 10:56 ` [Qemu-devel] [PATCH 8/9] nvdimm acpi: emulate dsm method Xiao Guangrong
2016-03-01 17:09   ` Michael S. Tsirkin
2016-03-02  3:30     ` Xiao Guangrong
2016-03-02  6:36       ` Michael S. Tsirkin [this message]
2016-03-02  7:15         ` Xiao Guangrong
2016-03-02  7:20           ` Michael S. Tsirkin
2016-03-02  7:29             ` Xiao Guangrong
2016-03-02  8:44               ` Michael S. Tsirkin
2016-03-02  9:05                 ` Xiao Guangrong
2016-03-02  7:21           ` Xiao Guangrong
2016-03-01 17:12   ` Michael S. Tsirkin
2016-03-02  4:00     ` Xiao Guangrong
2016-03-02  6:38       ` Michael S. Tsirkin
2016-03-01 10:56 ` [Qemu-devel] [PATCH 9/9] nvdimm acpi: add _CRS Xiao Guangrong
2016-03-01 16:36 ` [Qemu-devel] [PATCH v4 0/9] NVDIMM ACPI: introduce the framework of QEMU emulated Michael S. Tsirkin
2016-03-02  4:06   ` Xiao Guangrong
2016-03-01 18:38 ` Michael S. Tsirkin
