From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Paul Durrant" <paul@xen.org>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Ankur Arora" <ankur.a.arora@oracle.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Thomas Huth" <thuth@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Juan Quintela" <quintela@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	"Claudio Fontana" <cfontana@suse.de>
Subject: [RFC PATCH v2 11/22] i386/xen: implement HYPERCALL_xen_version
Date: Fri,  9 Dec 2022 09:56:01 +0000
Message-ID: <20221209095612.689243-12-dwmw2@infradead.org>
In-Reply-To: <20221209095612.689243-1-dwmw2@infradead.org>

From: Joao Martins <joao.m.martins@oracle.com>

This is just meant to serve as an example of how we can implement
hypercalls. xen_version is a natural first candidate, since QEMU is
responsible for controlling which features are advertised to the
guest, so handling it here seems appropriate.
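
For illustration, the guest side of this hypercall looks roughly as
follows. This is a minimal sketch: hypercall_xen_version() is a
hypothetical wrapper standing in for whatever VMCALL/VMMCALL thunk
the guest kernel uses to issue Xen hypercalls; it is not part of
this patch.

    /* Ask for the first 32-bit submap of feature flags. */
    struct xen_feature_info fi = {
        .submap_idx = 0,
    };

    /* __HYPERVISOR_xen_version with cmd XENVER_get_features; 'arg'
     * is a guest virtual address, which QEMU reads and writes back
     * via kvm_gva_rw() in this patch.
     */
    if (hypercall_xen_version(XENVER_get_features, &fi) == 0 &&
        (fi.submap & (1 << XENFEAT_auto_translated_physmap))) {
        /* Running with an auto-translated physmap, as HVM guests do. */
    }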

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
[dwmw2: Implement kvm_gva_rw() safely]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 target/i386/xen.c | 79 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/target/i386/xen.c b/target/i386/xen.c
index 708ab908a0..55beed1913 100644
--- a/target/i386/xen.c
+++ b/target/i386/xen.c
@@ -12,9 +12,51 @@
 #include "qemu/osdep.h"
 #include "qemu/log.h"
 #include "kvm/kvm_i386.h"
+#include "exec/address-spaces.h"
 #include "xen.h"
 #include "trace.h"
 
+#include "standard-headers/xen/version.h"
+
+static int kvm_gva_rw(CPUState *cs, uint64_t gva, void *_buf, size_t sz,
+                      bool is_write)
+{
+    uint8_t *buf = (uint8_t *)_buf;
+    size_t i = 0, len = 0;
+    int ret;
+
+    for (i = 0; i < sz; i+= len) {
+        struct kvm_translation tr = {
+            .linear_address = gva + i,
+        };
+
+        len = TARGET_PAGE_SIZE - (tr.linear_address & ~TARGET_PAGE_MASK);
+        if (len > sz)
+            len = sz;
+
+        ret = kvm_vcpu_ioctl(cs, KVM_TRANSLATE, &tr);
+        if (ret || !tr.valid || (is_write && !tr.writeable)) {
+            return -EFAULT;
+        }
+
+        cpu_physical_memory_rw(tr.physical_address, buf + i, len, is_write);
+    }
+
+    return 0;
+}
+
+static inline int kvm_copy_from_gva(CPUState *cs, uint64_t gva, void *buf,
+                                    size_t sz)
+{
+    return kvm_gva_rw(cs, gva, buf, sz, false);
+}
+
+static inline int kvm_copy_to_gva(CPUState *cs, uint64_t gva, void *buf,
+                                  size_t sz)
+{
+    return kvm_gva_rw(cs, gva, buf, sz, true);
+}
+
 int kvm_xen_init(KVMState *s, uint32_t xen_version)
 {
     const int required_caps = KVM_XEN_HVM_CONFIG_HYPERCALL_MSR |
@@ -50,6 +92,40 @@ int kvm_xen_init(KVMState *s, uint32_t xen_version)
     return 0;
 }
 
+static bool kvm_xen_hcall_xen_version(struct kvm_xen_exit *exit, X86CPU *cpu,
+                                      int cmd, uint64_t arg)
+{
+    int err = 0;
+
+    switch (cmd) {
+    case XENVER_get_features: {
+        struct xen_feature_info fi;
+
+        err = kvm_copy_from_gva(CPU(cpu), arg, &fi, sizeof(fi));
+        if (err) {
+            break;
+        }
+
+        fi.submap = 0;
+        if (fi.submap_idx == 0) {
+            fi.submap |= 1 << XENFEAT_writable_page_tables |
+                         1 << XENFEAT_writable_descriptor_tables |
+                         1 << XENFEAT_auto_translated_physmap |
+                         1 << XENFEAT_supervisor_mode_kernel;
+        }
+
+        err = kvm_copy_to_gva(CPU(cpu), arg, &fi, sizeof(fi));
+        break;
+    }
+
+    default:
+        return false;
+    }
+
+    exit->u.hcall.result = err;
+    return true;
+}
+
 static bool __kvm_xen_handle_exit(X86CPU *cpu, struct kvm_xen_exit *exit)
 {
     uint16_t code = exit->u.hcall.input;
@@ -60,6 +136,9 @@ static bool __kvm_xen_handle_exit(X86CPU *cpu, struct kvm_xen_exit *exit)
     }
 
     switch (code) {
+    case __HYPERVISOR_xen_version:
+        return kvm_xen_hcall_xen_version(exit, cpu, exit->u.hcall.params[0],
+                                         exit->u.hcall.params[1]);
     default:
         return false;
     }
-- 
2.35.3


