From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jens Freimann
Date: Thu, 5 Mar 2015 16:56:21 +0100
Message-Id: <1425570981-40609-3-git-send-email-jfrei@linux.vnet.ibm.com>
In-Reply-To: <1425570981-40609-1-git-send-email-jfrei@linux.vnet.ibm.com>
References: <1425570981-40609-1-git-send-email-jfrei@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH 2/2] s390x/kvm: passing max memory size to accelerator
To: Christian Borntraeger, Alexander Graf, Cornelia Huck
Cc: Jens Freimann, qemu-devel@nongnu.org, Dominik Dingel

From: Dominik Dingel

With the kernel change "KVM: s390: Allow userspace to limit guest memory
size", KVM can apply certain optimizations based on the guest memory limit.
The guest memory limit is computed from the initially defined memory size
plus the standby (hotpluggable) memory.

Reviewed-by: Thomas Huth
Reviewed-by: Guenther Hutzl
Reviewed-by: David Hildenbrand
Signed-off-by: Dominik Dingel
Signed-off-by: Jens Freimann
---
 hw/s390x/s390-virtio-ccw.c | 10 ++++++++++
 target-s390x/cpu.h         | 14 ++++++++++++++
 target-s390x/kvm.c         | 45 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 078371a..eea0742 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -97,6 +97,7 @@ static void ccw_init(MachineState *machine)
     ram_addr_t pad_size = 0;
     ram_addr_t maxmem = qemu_opt_get_size(opts, "maxmem", my_ram_size);
     ram_addr_t standby_mem_size = maxmem - my_ram_size;
+    uint64_t kvm_limit;
 
     /* The storage increment size is a multiple of 1M and is a power of 2.
      * The number of storage increments must be MAX_STORAGE_INCREMENTS or fewer.
@@ -121,6 +122,15 @@ static void ccw_init(MachineState *machine)
 
     /* let's propagate the changed ram size into the global variable. */
     ram_size = my_ram_size;
+    machine->maxram_size = my_ram_size + standby_mem_size;
+
+    ret = s390_set_memory_limit(machine->maxram_size, &kvm_limit);
+    if (ret == -E2BIG) {
+        hw_error("qemu: host supports a maximum of %" PRIu64 " GB",
+                 kvm_limit >> 30);
+    } else if (ret) {
+        hw_error("qemu: setting the guest size failed");
+    }
 
     /* get a BUS */
     css_bus = virtual_css_bus_init();
diff --git a/target-s390x/cpu.h b/target-s390x/cpu.h
index be53b5a..f89e1e3 100644
--- a/target-s390x/cpu.h
+++ b/target-s390x/cpu.h
@@ -1062,6 +1062,7 @@ int kvm_s390_get_memslot_count(KVMState *s);
 void kvm_s390_clear_cmma_callback(void *opaque);
 int kvm_s390_set_cpu_state(S390CPU *cpu, uint8_t cpu_state);
 void kvm_s390_reset_vcpu(S390CPU *cpu);
+int kvm_s390_set_mem_limit(KVMState *s, uint64_t new_limit, uint64_t *hw_limit);
 #else
 static inline void kvm_s390_io_interrupt(uint16_t subchannel_id,
                                          uint16_t subchannel_nr,
@@ -1099,8 +1100,21 @@ static inline int kvm_s390_set_cpu_state(S390CPU *cpu, uint8_t cpu_state)
 static inline void kvm_s390_reset_vcpu(S390CPU *cpu)
 {
 }
+static inline int kvm_s390_set_mem_limit(KVMState *s, uint64_t new_limit,
+                                         uint64_t *hw_limit)
+{
+    return 0;
+}
 #endif
 
+static inline int s390_set_memory_limit(uint64_t new_limit, uint64_t *hw_limit)
+{
+    if (kvm_enabled()) {
+        return kvm_s390_set_mem_limit(kvm_state, new_limit, hw_limit);
+    }
+    return 0;
+}
+
 static inline void cmma_reset(S390CPU *cpu)
 {
     if (kvm_enabled()) {
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index 6ef714c..f4bcca1 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -122,6 +122,51 @@ static int cap_async_pf;
 
 static void *legacy_s390_alloc(size_t size, uint64_t *align);
 
+static int kvm_s390_supports_mem_limit(KVMState *s)
+{
+    struct kvm_device_attr attr = {
+        .group = KVM_S390_VM_MEM_CTRL,
+        .attr = KVM_S390_VM_MEM_LIMIT_SIZE,
+    };
+
+    return (kvm_vm_ioctl(s, KVM_HAS_DEVICE_ATTR, &attr) == 0);
+}
+
+static int kvm_s390_query_mem_limit(KVMState *s, uint64_t *memory_limit)
+{
+    struct kvm_device_attr attr = {
+        .group = KVM_S390_VM_MEM_CTRL,
+        .attr = KVM_S390_VM_MEM_LIMIT_SIZE,
+        .addr = (uint64_t) memory_limit,
+    };
+
+    return kvm_vm_ioctl(s, KVM_GET_DEVICE_ATTR, &attr);
+}
+
+int kvm_s390_set_mem_limit(KVMState *s, uint64_t new_limit, uint64_t *hw_limit)
+{
+    int rc;
+
+    struct kvm_device_attr attr = {
+        .group = KVM_S390_VM_MEM_CTRL,
+        .attr = KVM_S390_VM_MEM_LIMIT_SIZE,
+        .addr = (uint64_t) &new_limit,
+    };
+
+    if (!kvm_s390_supports_mem_limit(s)) {
+        return 0;
+    }
+
+    rc = kvm_s390_query_mem_limit(s, hw_limit);
+    if (rc) {
+        return rc;
+    } else if (*hw_limit < new_limit) {
+        return -E2BIG;
+    }
+
+    return kvm_vm_ioctl(s, KVM_SET_DEVICE_ATTR, &attr);
+}
+
 static int kvm_s390_check_clear_cmma(KVMState *s)
 {
     struct kvm_device_attr attr = {
-- 
2.1.4
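
For context, not part of the patch itself: the interface exercised above is the
generic KVM device-attribute API on the VM file descriptor. Below is a minimal,
self-contained sketch of how the same KVM_S390_VM_MEM_CTRL /
KVM_S390_VM_MEM_LIMIT_SIZE attribute could be queried from a standalone
program, assuming an s390x host whose <linux/kvm.h> defines these constants and
a kernel that includes the "KVM: s390: Allow userspace to limit guest memory
size" change. It mirrors what kvm_s390_query_mem_limit() does through QEMU's
kvm_vm_ioctl() wrapper; the program itself is purely illustrative.

/*
 * Illustrative sketch (not from the patch): read the current s390 guest
 * memory limit via the KVM VM device-attribute interface.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm_fd = open("/dev/kvm", O_RDWR);
    if (kvm_fd < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
    if (vm_fd < 0) {
        perror("KVM_CREATE_VM");
        close(kvm_fd);
        return 1;
    }

    uint64_t limit = 0;
    struct kvm_device_attr attr = {
        .group = KVM_S390_VM_MEM_CTRL,
        .attr  = KVM_S390_VM_MEM_LIMIT_SIZE,
        .addr  = (uint64_t) &limit,
    };

    /* Older kernels do not expose the attribute at all. */
    if (ioctl(vm_fd, KVM_HAS_DEVICE_ATTR, &attr) != 0) {
        printf("memory limit attribute not supported by this kernel\n");
    } else if (ioctl(vm_fd, KVM_GET_DEVICE_ATTR, &attr) == 0) {
        printf("current guest memory limit: %llu bytes\n",
               (unsigned long long) limit);
    } else {
        perror("KVM_GET_DEVICE_ATTR");
    }

    close(vm_fd);
    close(kvm_fd);
    return 0;
}

Setting a lower limit would work the same way with KVM_SET_DEVICE_ATTR, which
is what kvm_s390_set_mem_limit() does after checking that the requested limit
does not exceed what the host reports.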