* [Qemu-devel] [RFC PATCH v3 0/3] s390: channel I/O support in qemu.
@ 2012-10-31 16:24 Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 1/3] Update linux headers Cornelia Huck
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-10-31 16:24 UTC
To: KVM, linux-s390, qemu-devel
Cc: Carsten Otte, Anthony Liguori, Sebastian Ott, Marcelo Tosatti,
Heiko Carstens, Alexander Graf, Christian Borntraeger, Avi Kivity,
Martin Schwidefsky
Hi,
here's the latest version of my patchset introducing virtio-ccw.
This has been reworked for the changed kernel interface: qemu
will now handle all channel I/O requests (except the I/O-interrupt-related
ones, which are handled in-kernel in the kvm case). This avoids
duplicating code between qemu and kvm.
There are some misc fixes as well (mainly related to virtio-ccw).
Use of mutexes has hopefully been exorcised for now.
Unfortunately, patch 2 is now rather large - but I couldn't think
of a good way to split it up.
I still know of various things that need looking into (memory
accesses, for one), but I'd like some feedback about the new
interface first.
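To make the intended split concrete, here's a small stand-alone sketch
(not code from this series; the struct and handler names below are made
up for illustration): the kernel forwards the intercepted instruction to
userspace together with any already-dequeued I/O interrupt, and userspace
owns the subchannel state and builds the status to return.

/*
 * Stand-alone sketch only -- not code from this series.  It models the
 * data a TEST SUBCHANNEL intercept carries (compare the KVM_EXIT_S390_TSCH
 * payload added in patch 1) and a purely-userspace handler for it.
 */
#include <stdint.h>
#include <stdio.h>

struct tsch_intercept {            /* simplified stand-in for the exit data */
    uint16_t subchannel_id;
    uint16_t subchannel_nr;
    uint32_t io_int_parm;
    uint8_t dequeued;              /* kernel already dequeued an I/O irq */
};

static int handle_tsch(const struct tsch_intercept *ti)
{
    /* A real handler would look up the SubchDev and call css_do_tsch(). */
    printf("tsch %04x.%04x parm %08x (%s)\n",
           ti->subchannel_id, ti->subchannel_nr, ti->io_int_parm,
           ti->dequeued ? "interrupt dequeued in-kernel" : "none pending");
    return 0;
}

int main(void)
{
    struct tsch_intercept ti = {
        .subchannel_id = 0x0001,
        .subchannel_nr = 0x0000,
        .io_int_parm = 0xcafe,
        .dequeued = 1,
    };
    return handle_tsch(&ti);
}

The actual exit data for this case is the new KVM_EXIT_S390_TSCH payload
introduced by the header update in patch 1.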
Cornelia Huck (3):
Update linux headers.
s390: Virtual channel subsystem support.
s390: Add new channel I/O based virtio transport.
hw/s390-virtio.c | 282 ++++++--
hw/s390x/Makefile.objs | 2 +
hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++
hw/s390x/css.h | 90 +++
hw/s390x/virtio-ccw.c | 904 +++++++++++++++++++++++++
hw/s390x/virtio-ccw.h | 81 +++
linux-headers/asm-generic/kvm_para.h | 5 +
linux-headers/asm-powerpc/kvm.h | 59 ++
linux-headers/asm-powerpc/kvm_para.h | 7 +-
linux-headers/asm-x86/kvm.h | 17 +
linux-headers/linux/kvm.h | 61 +-
target-s390x/Makefile.objs | 2 +-
target-s390x/cpu.h | 232 +++++++
target-s390x/helper.c | 146 ++++
target-s390x/ioinst.c | 737 +++++++++++++++++++++
target-s390x/ioinst.h | 213 ++++++
target-s390x/kvm.c | 251 ++++++-
target-s390x/misc_helper.c | 6 +-
18 files changed, 4204 insertions(+), 100 deletions(-)
create mode 100644 hw/s390x/css.c
create mode 100644 hw/s390x/css.h
create mode 100644 hw/s390x/virtio-ccw.c
create mode 100644 hw/s390x/virtio-ccw.h
create mode 100644 linux-headers/asm-generic/kvm_para.h
create mode 100644 target-s390x/ioinst.c
create mode 100644 target-s390x/ioinst.h
--
1.7.12.4
* [Qemu-devel] [PATCH 1/3] Update linux headers.
2012-10-31 16:24 [Qemu-devel] [RFC PATCH v3 0/3] s390: channel I/O support in qemu Cornelia Huck
@ 2012-10-31 16:24 ` Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 3/3] s390: Add new channel I/O based virtio transport Cornelia Huck
2 siblings, 0 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-10-31 16:24 UTC
To: KVM, linux-s390, qemu-devel
Cc: Carsten Otte, Anthony Liguori, Sebastian Ott, Marcelo Tosatti,
Heiko Carstens, Alexander Graf, Christian Borntraeger, Avi Kivity,
Martin Schwidefsky
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
linux-headers/asm-generic/kvm_para.h | 5 +++
linux-headers/asm-powerpc/kvm.h | 59 ++++++++++++++++++++++++++++++++++
linux-headers/asm-powerpc/kvm_para.h | 7 +++--
linux-headers/asm-x86/kvm.h | 17 ++++++++++
linux-headers/linux/kvm.h | 61 ++++++++++++++++++++++++++++------
5 files changed, 137 insertions(+), 12 deletions(-)
create mode 100644 linux-headers/asm-generic/kvm_para.h
diff --git a/linux-headers/asm-generic/kvm_para.h b/linux-headers/asm-generic/kvm_para.h
new file mode 100644
index 0000000..63df88b
--- /dev/null
+++ b/linux-headers/asm-generic/kvm_para.h
@@ -0,0 +1,5 @@
+#ifndef _ASM_GENERIC_KVM_PARA_H
+#define _ASM_GENERIC_KVM_PARA_H
+
+
+#endif
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
index 1bea4d8..b89ae4d 100644
--- a/linux-headers/asm-powerpc/kvm.h
+++ b/linux-headers/asm-powerpc/kvm.h
@@ -221,6 +221,12 @@ struct kvm_sregs {
__u32 dbsr; /* KVM_SREGS_E_UPDATE_DBSR */
__u32 dbcr[3];
+ /*
+ * iac/dac registers are 64bit wide, while this API
+ * interface provides only lower 32 bits on 64 bit
+ * processors. ONE_REG interface is added for 64bit
+ * iac/dac registers.
+ */
__u32 iac[4];
__u32 dac[2];
__u32 dvc[2];
@@ -326,5 +332,58 @@ struct kvm_book3e_206_tlb_params {
};
#define KVM_REG_PPC_HIOR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x1)
+#define KVM_REG_PPC_IAC1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x2)
+#define KVM_REG_PPC_IAC2 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x3)
+#define KVM_REG_PPC_IAC3 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x4)
+#define KVM_REG_PPC_IAC4 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x5)
+#define KVM_REG_PPC_DAC1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x6)
+#define KVM_REG_PPC_DAC2 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x7)
+#define KVM_REG_PPC_DABR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x8)
+#define KVM_REG_PPC_DSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x9)
+#define KVM_REG_PPC_PURR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xa)
+#define KVM_REG_PPC_SPURR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb)
+#define KVM_REG_PPC_DAR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc)
+#define KVM_REG_PPC_DSISR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xd)
+#define KVM_REG_PPC_AMR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xe)
+#define KVM_REG_PPC_UAMOR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xf)
+
+#define KVM_REG_PPC_MMCR0 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x10)
+#define KVM_REG_PPC_MMCR1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x11)
+#define KVM_REG_PPC_MMCRA (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x12)
+
+#define KVM_REG_PPC_PMC1 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x18)
+#define KVM_REG_PPC_PMC2 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x19)
+#define KVM_REG_PPC_PMC3 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1a)
+#define KVM_REG_PPC_PMC4 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1b)
+#define KVM_REG_PPC_PMC5 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1c)
+#define KVM_REG_PPC_PMC6 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1d)
+#define KVM_REG_PPC_PMC7 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1e)
+#define KVM_REG_PPC_PMC8 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x1f)
+
+/* 32 floating-point registers */
+#define KVM_REG_PPC_FPR0 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x20)
+#define KVM_REG_PPC_FPR(n) (KVM_REG_PPC_FPR0 + (n))
+#define KVM_REG_PPC_FPR31 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x3f)
+
+/* 32 VMX/Altivec vector registers */
+#define KVM_REG_PPC_VR0 (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x40)
+#define KVM_REG_PPC_VR(n) (KVM_REG_PPC_VR0 + (n))
+#define KVM_REG_PPC_VR31 (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x5f)
+
+/* 32 double-width FP registers for VSX */
+/* High-order halves overlap with FP regs */
+#define KVM_REG_PPC_VSR0 (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x60)
+#define KVM_REG_PPC_VSR(n) (KVM_REG_PPC_VSR0 + (n))
+#define KVM_REG_PPC_VSR31 (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x7f)
+
+/* FP and vector status/control registers */
+#define KVM_REG_PPC_FPSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x80)
+#define KVM_REG_PPC_VSCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x81)
+
+/* Virtual processor areas */
+/* For SLB & DTL, address in high (first) half, length in low half */
+#define KVM_REG_PPC_VPA_ADDR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x82)
+#define KVM_REG_PPC_VPA_SLB (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x83)
+#define KVM_REG_PPC_VPA_DTL (KVM_REG_PPC | KVM_REG_SIZE_U128 | 0x84)
#endif /* __LINUX_KVM_POWERPC_H */
diff --git a/linux-headers/asm-powerpc/kvm_para.h b/linux-headers/asm-powerpc/kvm_para.h
index c047a84..7e64f57 100644
--- a/linux-headers/asm-powerpc/kvm_para.h
+++ b/linux-headers/asm-powerpc/kvm_para.h
@@ -75,9 +75,10 @@ struct kvm_vcpu_arch_shared {
};
#define KVM_SC_MAGIC_R0 0x4b564d21 /* "KVM!" */
-#define HC_VENDOR_KVM (42 << 16)
-#define HC_EV_SUCCESS 0
-#define HC_EV_UNIMPLEMENTED 12
+
+#define KVM_HCALL_TOKEN(num) _EV_HCALL_TOKEN(EV_KVM_VENDOR_ID, num)
+
+#include <asm/epapr_hcalls.h>
#define KVM_FEATURE_MAGIC_PAGE 1
diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
index 246617e..a65ec29 100644
--- a/linux-headers/asm-x86/kvm.h
+++ b/linux-headers/asm-x86/kvm.h
@@ -9,6 +9,22 @@
#include <linux/types.h>
#include <linux/ioctl.h>
+#define DE_VECTOR 0
+#define DB_VECTOR 1
+#define BP_VECTOR 3
+#define OF_VECTOR 4
+#define BR_VECTOR 5
+#define UD_VECTOR 6
+#define NM_VECTOR 7
+#define DF_VECTOR 8
+#define TS_VECTOR 10
+#define NP_VECTOR 11
+#define SS_VECTOR 12
+#define GP_VECTOR 13
+#define PF_VECTOR 14
+#define MF_VECTOR 16
+#define MC_VECTOR 18
+
/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_PIT
#define __KVM_HAVE_IOAPIC
@@ -25,6 +41,7 @@
#define __KVM_HAVE_DEBUGREGS
#define __KVM_HAVE_XSAVE
#define __KVM_HAVE_XCRS
+#define __KVM_HAVE_READONLY_MEM
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index 4b9e575..84e7abc 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -101,9 +101,13 @@ struct kvm_userspace_memory_region {
__u64 userspace_addr; /* start of the userspace allocated memory */
};
-/* for kvm_memory_region::flags */
-#define KVM_MEM_LOG_DIRTY_PAGES 1UL
-#define KVM_MEMSLOT_INVALID (1UL << 1)
+/*
+ * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace,
+ * other bits are reserved for kvm internal use which are defined in
+ * include/linux/kvm_host.h.
+ */
+#define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
+#define KVM_MEM_READONLY (1UL << 1)
/* for KVM_IRQ_LINE */
struct kvm_irq_level {
@@ -163,10 +167,16 @@ struct kvm_pit_config {
#define KVM_EXIT_OSI 18
#define KVM_EXIT_PAPR_HCALL 19
#define KVM_EXIT_S390_UCONTROL 20
+#define KVM_EXIT_WATCHDOG 21
+#define KVM_EXIT_S390_TSCH 22
/* For KVM_EXIT_INTERNAL_ERROR */
-#define KVM_INTERNAL_ERROR_EMULATION 1
-#define KVM_INTERNAL_ERROR_SIMUL_EX 2
+/* Emulate instruction failed. */
+#define KVM_INTERNAL_ERROR_EMULATION 1
+/* Encounter unexpected simultaneous exceptions. */
+#define KVM_INTERNAL_ERROR_SIMUL_EX 2
+/* Encounter unexpected vm-exit due to delivery event. */
+#define KVM_INTERNAL_ERROR_DELIVERY_EV 3
/* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */
struct kvm_run {
@@ -276,6 +286,15 @@ struct kvm_run {
__u64 ret;
__u64 args[9];
} papr_hcall;
+ /* KVM_EXIT_S390_TSCH */
+ struct {
+ __u16 subchannel_id;
+ __u16 subchannel_nr;
+ __u32 io_int_parm;
+ __u32 io_int_word;
+ __u32 ipb;
+ __u8 dequeued;
+ } s390_tsch;
/* Fix the size of the union. */
char padding[256];
};
@@ -388,10 +407,17 @@ struct kvm_s390_psw {
#define KVM_S390_PROGRAM_INT 0xfffe0001u
#define KVM_S390_SIGP_SET_PREFIX 0xfffe0002u
#define KVM_S390_RESTART 0xfffe0003u
+#define KVM_S390_MCHK 0xfffe1000u
#define KVM_S390_INT_VIRTIO 0xffff2603u
#define KVM_S390_INT_SERVICE 0xffff2401u
#define KVM_S390_INT_EMERGENCY 0xffff1201u
#define KVM_S390_INT_EXTERNAL_CALL 0xffff1202u
+#define KVM_S390_INT_IO(ai,cssid,ssid,schid) \
+ (((schid)) | \
+ ((ssid) << 16) | \
+ ((cssid) << 18) | \
+ ((ai) << 26))
+
struct kvm_s390_interrupt {
__u32 type;
@@ -473,6 +499,8 @@ struct kvm_ppc_smmu_info {
struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
};
+#define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0)
+
#define KVMIO 0xAE
/* machine type bits, to be used as argument to KVM_CREATE_VM */
@@ -618,6 +646,12 @@ struct kvm_ppc_smmu_info {
#define KVM_CAP_PPC_GET_SMMU_INFO 78
#define KVM_CAP_S390_COW 79
#define KVM_CAP_PPC_ALLOC_HTAB 80
+#ifdef __KVM_HAVE_READONLY_MEM
+#define KVM_CAP_READONLY_MEM 81
+#endif
+#define KVM_CAP_IRQFD_RESAMPLE 82
+#define KVM_CAP_PPC_BOOKE_WATCHDOG 83
+#define KVM_CAP_S390_CSS_SUPPORT 84
#ifdef KVM_CAP_IRQ_ROUTING
@@ -683,12 +717,21 @@ struct kvm_xen_hvm_config {
#endif
#define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
+/*
+ * Available with KVM_CAP_IRQFD_RESAMPLE
+ *
+ * KVM_IRQFD_FLAG_RESAMPLE indicates resamplefd is valid and specifies
+ * the irqfd to operate in resampling mode for level triggered interrupt
+ * emulation. See Documentation/virtual/kvm/api.txt.
+ */
+#define KVM_IRQFD_FLAG_RESAMPLE (1 << 1)
struct kvm_irqfd {
__u32 fd;
__u32 gsi;
__u32 flags;
- __u8 pad[20];
+ __u32 resamplefd;
+ __u8 pad[16];
};
struct kvm_clock_data {
@@ -831,6 +874,9 @@ struct kvm_s390_ucas_mapping {
#define KVM_PPC_GET_SMMU_INFO _IOR(KVMIO, 0xa6, struct kvm_ppc_smmu_info)
/* Available with KVM_CAP_PPC_ALLOC_HTAB */
#define KVM_PPC_ALLOCATE_HTAB _IOWR(KVMIO, 0xa7, __u32)
+#define KVM_CREATE_SPAPR_TCE _IOW(KVMIO, 0xa8, struct kvm_create_spapr_tce)
+/* Available with KVM_CAP_RMA */
+#define KVM_ALLOCATE_RMA _IOR(KVMIO, 0xa9, struct kvm_allocate_rma)
/*
* ioctls for vcpu fds
@@ -894,9 +940,6 @@ struct kvm_s390_ucas_mapping {
/* Available with KVM_CAP_XCRS */
#define KVM_GET_XCRS _IOR(KVMIO, 0xa6, struct kvm_xcrs)
#define KVM_SET_XCRS _IOW(KVMIO, 0xa7, struct kvm_xcrs)
-#define KVM_CREATE_SPAPR_TCE _IOW(KVMIO, 0xa8, struct kvm_create_spapr_tce)
-/* Available with KVM_CAP_RMA */
-#define KVM_ALLOCATE_RMA _IOR(KVMIO, 0xa9, struct kvm_allocate_rma)
/* Available with KVM_CAP_SW_TLB */
#define KVM_DIRTY_TLB _IOW(KVMIO, 0xaa, struct kvm_dirty_tlb)
/* Available with KVM_CAP_ONE_REG */
--
1.7.12.4
* [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support.
2012-10-31 16:24 [Qemu-devel] [RFC PATCH v3 0/3] s390: channel I/O support in qemu Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 1/3] Update linux headers Cornelia Huck
@ 2012-10-31 16:24 ` Cornelia Huck
2012-11-13 1:17 ` Marcelo Tosatti
2012-11-19 13:30 ` Alexander Graf
2012-10-31 16:24 ` [Qemu-devel] [PATCH 3/3] s390: Add new channel I/O based virtio transport Cornelia Huck
2 siblings, 2 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-10-31 16:24 UTC
To: KVM, linux-s390, qemu-devel
Cc: Carsten Otte, Anthony Liguori, Sebastian Ott, Marcelo Tosatti,
Heiko Carstens, Alexander Graf, Christian Borntraeger, Avi Kivity,
Martin Schwidefsky
Provide a mechanism for qemu to present fully virtual subchannels to
the guest. In the KVM case, this relies on the kernel's css support
for I/O and machine check interrupt handling. The !KVM case handles
interrupts on its own.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
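Note (not part of the patch): as a reviewer aid, here is a stand-alone
sketch of the per-ISC queueing scheme that the !KVM interrupt handling
below uses; the names and sizes are simplified stand-ins, not the actual
qemu structures.

/*
 * Stand-alone illustration only, not the actual qemu code: the !KVM path
 * (cpu_inject_io()/do_io_interrupt()) keeps one small queue per
 * interruption subclass (ISC), with the ISC derived from the one-hot ISC
 * mask in the I/O interruption word.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_IO_QUEUE 16

struct io_irq {
    uint16_t id;
    uint16_t nr;
    uint32_t parm;
    uint32_t word;
};

static struct io_irq io_queue[MAX_IO_QUEUE][8];
static int io_index[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };  /* -1: empty */

static int isc_from_word(uint32_t word)
{
    int isc;

    /* Bits 0-7 of the word carry a one-hot ISC mask: 0x80 >> isc. */
    for (isc = 0; isc < 8; isc++) {
        if ((word >> 24) & (0x80 >> isc)) {
            return isc;
        }
    }
    return -1;
}

static void inject_io(uint16_t id, uint16_t nr, uint32_t parm, uint32_t word)
{
    int isc = isc_from_word(word);

    if (isc < 0 || io_index[isc] >= MAX_IO_QUEUE - 1) {
        return;    /* queue full or bad word: drop, as the patch does */
    }
    io_index[isc]++;
    io_queue[io_index[isc]][isc] = (struct io_irq){ id, nr, parm, word };
}

int main(void)
{
    inject_io(0x0001, 0x0000, 0x12345678, 0x80000000);
    printf("isc 0 has %d queued interrupt(s)\n", io_index[0] + 1);
    return 0;
}

The real code additionally delivers the queued interrupts from
do_io_interrupt() once the PSW I/O mask and CR6 allow it.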
hw/s390x/Makefile.objs | 1 +
hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++++++++++++
hw/s390x/css.h | 90 ++++
target-s390x/Makefile.objs | 2 +-
target-s390x/cpu.h | 232 +++++++++
target-s390x/helper.c | 146 ++++++
target-s390x/ioinst.c | 737 +++++++++++++++++++++++++++
target-s390x/ioinst.h | 213 ++++++++
target-s390x/kvm.c | 251 ++++++++-
target-s390x/misc_helper.c | 6 +-
10 files changed, 2872 insertions(+), 15 deletions(-)
create mode 100644 hw/s390x/css.c
create mode 100644 hw/s390x/css.h
create mode 100644 target-s390x/ioinst.c
create mode 100644 target-s390x/ioinst.h
diff --git a/hw/s390x/Makefile.objs b/hw/s390x/Makefile.objs
index 096dfcd..378b099 100644
--- a/hw/s390x/Makefile.objs
+++ b/hw/s390x/Makefile.objs
@@ -4,3 +4,4 @@ obj-y := $(addprefix ../,$(obj-y))
obj-y += sclp.o
obj-y += event-facility.o
obj-y += sclpquiesce.o sclpconsole.o
+obj-y += css.o
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
new file mode 100644
index 0000000..9adffb3
--- /dev/null
+++ b/hw/s390x/css.c
@@ -0,0 +1,1209 @@
+/*
+ * Channel subsystem base support.
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include "qemu-thread.h"
+#include "qemu-queue.h"
+#include <hw/qdev.h>
+#include "bitops.h"
+#include "kvm.h"
+#include "cpu.h"
+#include "ioinst.h"
+#include "css.h"
+#include "virtio-ccw.h"
+
+typedef struct CrwContainer {
+ CRW crw;
+ QTAILQ_ENTRY(CrwContainer) sibling;
+} CrwContainer;
+
+typedef struct ChpInfo {
+ uint8_t in_use;
+ uint8_t type;
+ uint8_t is_virtual;
+} ChpInfo;
+
+typedef struct SubchSet {
+ SubchDev *sch[MAX_SCHID + 1];
+ unsigned long schids_used[BITS_TO_LONGS(MAX_SCHID + 1)];
+ unsigned long devnos_used[BITS_TO_LONGS(MAX_SCHID + 1)];
+} SubchSet;
+
+typedef struct CssImage {
+ SubchSet *sch_set[MAX_SSID + 1];
+ ChpInfo chpids[MAX_CHPID + 1];
+} CssImage;
+
+typedef struct ChannelSubSys {
+ QTAILQ_HEAD(, CrwContainer) pending_crws;
+ bool do_crw_mchk;
+ bool crws_lost;
+ uint8_t max_cssid;
+ uint8_t max_ssid;
+ bool chnmon_active;
+ uint64_t chnmon_area;
+ CssImage *css[MAX_CSSID + 1];
+ uint8_t default_cssid;
+} ChannelSubSys;
+
+static ChannelSubSys *channel_subsys;
+
+int css_create_css_image(uint8_t cssid, bool default_image)
+{
+ if (cssid > MAX_CSSID) {
+ return -EINVAL;
+ }
+ if (channel_subsys->css[cssid]) {
+ return -EBUSY;
+ }
+ channel_subsys->css[cssid] = g_try_malloc0(sizeof(CssImage));
+ if (!channel_subsys->css[cssid]) {
+ return -ENOMEM;
+ }
+ if (default_image) {
+ channel_subsys->default_cssid = cssid;
+ }
+ return 0;
+}
+
+static void css_write_phys_pmcw(uint64_t addr, PMCW *pmcw)
+{
+ int i;
+ uint32_t offset = 0;
+ struct copy_pmcw {
+ uint32_t intparm;
+ uint16_t flags;
+ uint16_t devno;
+ uint8_t lpm;
+ uint8_t pnom;
+ uint8_t lpum;
+ uint8_t pim;
+ uint16_t mbi;
+ uint8_t pom;
+ uint8_t pam;
+ uint8_t chpid[8];
+ uint32_t chars;
+ } *copy;
+
+ copy = (struct copy_pmcw *)pmcw;
+ stl_phys(addr + offset, copy->intparm);
+ offset += sizeof(copy->intparm);
+ stw_phys(addr + offset, copy->flags);
+ offset += sizeof(copy->flags);
+ stw_phys(addr + offset, copy->devno);
+ offset += sizeof(copy->devno);
+ stb_phys(addr + offset, copy->lpm);
+ offset += sizeof(copy->lpm);
+ stb_phys(addr + offset, copy->pnom);
+ offset += sizeof(copy->pnom);
+ stb_phys(addr + offset, copy->lpum);
+ offset += sizeof(copy->lpum);
+ stb_phys(addr + offset, copy->pim);
+ offset += sizeof(copy->pim);
+ stw_phys(addr + offset, copy->mbi);
+ offset += sizeof(copy->mbi);
+ stb_phys(addr + offset, copy->pom);
+ offset += sizeof(copy->pom);
+ stb_phys(addr + offset, copy->pam);
+ offset += sizeof(copy->pam);
+ for (i = 0; i < 8; i++) {
+ stb_phys(addr + offset, copy->chpid[i]);
+ offset += sizeof(copy->chpid[i]);
+ }
+ stl_phys(addr + offset, copy->chars);
+}
+
+static void css_write_phys_scsw(uint64_t addr, SCSW *scsw)
+{
+ uint32_t offset = 0;
+ struct copy_scsw {
+ uint32_t flags;
+ uint32_t cpa;
+ uint8_t dstat;
+ uint8_t cstat;
+ uint16_t count;
+ } *copy;
+
+ copy = (struct copy_scsw *)scsw;
+ stl_phys(addr + offset, copy->flags);
+ offset += sizeof(copy->flags);
+ stl_phys(addr + offset, copy->cpa);
+ offset += sizeof(copy->cpa);
+ stb_phys(addr + offset, copy->dstat);
+ offset += sizeof(copy->dstat);
+ stb_phys(addr + offset, copy->cstat);
+ offset += sizeof(copy->cstat);
+ stw_phys(addr + offset, copy->count);
+}
+
+static void css_inject_io_interrupt(SubchDev *sch)
+{
+ S390CPU *cpu = s390_cpu_addr2state(0);
+
+ s390_io_interrupt(&cpu->env,
+ channel_subsys->max_cssid > 0 ?
+ (sch->cssid << 8) | (1 << 3) | (sch->ssid << 1) | 1 :
+ (sch->ssid << 1) | 1,
+ sch->schid,
+ sch->curr_status.pmcw.intparm,
+ (0x80 >> sch->curr_status.pmcw.isc) << 24);
+}
+
+void css_conditional_io_interrupt(SubchDev *sch)
+{
+ /*
+ * If the subchannel is not currently status pending, make it pending
+ * with alert status.
+ */
+ if (sch && !(sch->curr_status.scsw.stctl & SCSW_STCTL_STATUS_PEND)) {
+ S390CPU *cpu = s390_cpu_addr2state(0);
+
+ sch->curr_status.scsw.stctl =
+ SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND;
+ /* Inject an I/O interrupt. */
+ s390_io_interrupt(&cpu->env,
+ channel_subsys->max_cssid > 0 ?
+ (sch->cssid << 8) | (1 << 3) | (sch->ssid << 1) | 1 :
+ (sch->ssid << 1) | 1,
+ sch->schid,
+ sch->curr_status.pmcw.intparm,
+ (0x80 >> sch->curr_status.pmcw.isc) << 24);
+ }
+}
+
+static void sch_handle_clear_func(SubchDev *sch)
+{
+ PMCW *p = &sch->curr_status.pmcw;
+ SCSW *s = &sch->curr_status.scsw;
+ int path;
+
+ /* Path management: In our simple css, we always choose the only path. */
+ path = 0x80;
+
+ /* Reset values prior to 'issuing the clear signal'. */
+ p->lpum = 0;
+ p->pom = 0xff;
+ s->pno = 0;
+
+ /* We always 'attempt to issue the clear signal', and we always succeed. */
+ sch->orb = NULL;
+ sch->channel_prog = NULL;
+ sch->last_cmd = NULL;
+ s->actl &= ~SCSW_ACTL_CLEAR_PEND;
+ s->stctl |= SCSW_STCTL_STATUS_PEND;
+
+ s->dstat = 0;
+ s->cstat = 0;
+ p->lpum = path;
+
+}
+
+static void sch_handle_halt_func(SubchDev *sch)
+{
+
+ PMCW *p = &sch->curr_status.pmcw;
+ SCSW *s = &sch->curr_status.scsw;
+ int path;
+
+ /* Path management: In our simple css, we always choose the only path. */
+ path = 0x80;
+
+ /* We always 'attempt to issue the halt signal', and we always succeed. */
+ sch->orb = NULL;
+ sch->channel_prog = NULL;
+ sch->last_cmd = NULL;
+ s->actl &= ~SCSW_ACTL_HALT_PEND;
+ s->stctl |= SCSW_STCTL_STATUS_PEND;
+
+ if ((s->actl & (SCSW_ACTL_SUBCH_ACTIVE | SCSW_ACTL_DEVICE_ACTIVE)) ||
+ !((s->actl & SCSW_ACTL_START_PEND) ||
+ (s->actl & SCSW_ACTL_SUSP))) {
+ s->dstat = SCSW_DSTAT_DEVICE_END;
+ }
+ s->cstat = 0;
+ p->lpum = path;
+
+}
+
+static int css_interpret_ccw(SubchDev *sch, CCW1 *ccw)
+{
+ int ret;
+ bool check_len;
+ int len;
+ int i;
+
+ if (!ccw) {
+ return -EIO;
+ }
+
+ /* Check for invalid command codes. */
+ if ((ccw->cmd_code & 0x0f) == 0) {
+ return -EINVAL;
+ }
+ if (((ccw->cmd_code & 0x0f) == CCW_CMD_TIC) &&
+ ((ccw->cmd_code & 0xf0) != 0)) {
+ return -EINVAL;
+ }
+
+ if (ccw->flags & CCW_FLAG_SUSPEND) {
+ return -ERESTART;
+ }
+
+ check_len = !((ccw->flags & CCW_FLAG_SLI) && !(ccw->flags & CCW_FLAG_DC));
+
+ /* Look at the command. */
+ switch (ccw->cmd_code) {
+ case CCW_CMD_NOOP:
+ /* Nothing to do. */
+ ret = 0;
+ break;
+ case CCW_CMD_BASIC_SENSE:
+ if (check_len) {
+ if (ccw->count != sizeof(sch->sense_data)) {
+ ret = -EINVAL;
+ break;
+ }
+ }
+ len = MIN(ccw->count, sizeof(sch->sense_data));
+ cpu_physical_memory_write(ccw->cda, sch->sense_data, len);
+ sch->curr_status.scsw.count = ccw->count - len;
+ memset(sch->sense_data, 0, sizeof(sch->sense_data));
+ ret = 0;
+ break;
+ case CCW_CMD_SENSE_ID:
+ {
+ uint8_t sense_bytes[256];
+
+ /* Sense ID information is device specific. */
+ memcpy(sense_bytes, &sch->id, sizeof(sense_bytes));
+ if (check_len) {
+ if (ccw->count != sizeof(sense_bytes)) {
+ ret = -EINVAL;
+ break;
+ }
+ }
+ len = MIN(ccw->count, sizeof(sense_bytes));
+ /*
+ * Only indicate 0xff in the first sense byte if we actually
+ * have enough room to store at least bytes 0-3.
+ */
+ if (len >= 4) {
+ stb_phys(ccw->cda, 0xff);
+ } else {
+ stb_phys(ccw->cda, 0);
+ }
+ i = 1;
+ for (i = 1; i < len - 1; i++) {
+ stb_phys(ccw->cda + i, sense_bytes[i]);
+ }
+ sch->curr_status.scsw.count = ccw->count - len;
+ ret = 0;
+ break;
+ }
+ case CCW_CMD_TIC:
+ if (sch->last_cmd->cmd_code == CCW_CMD_TIC) {
+ ret = -EINVAL;
+ break;
+ }
+ if (ccw->flags & (CCW_FLAG_CC | CCW_FLAG_DC)) {
+ ret = -EINVAL;
+ break;
+ }
+ sch->channel_prog = qemu_get_ram_ptr(ccw->cda);
+ ret = sch->channel_prog ? -EAGAIN : -EFAULT;
+ break;
+ default:
+ if (sch->ccw_cb) {
+ /* Handle device specific commands. */
+ ret = sch->ccw_cb(sch, ccw);
+ } else {
+ ret = -EOPNOTSUPP;
+ }
+ break;
+ }
+ sch->last_cmd = ccw;
+ if (ret == 0) {
+ if (ccw->flags & CCW_FLAG_CC) {
+ sch->channel_prog += 8;
+ ret = -EAGAIN;
+ }
+ }
+
+ return ret;
+}
+
+static void sch_handle_start_func(SubchDev *sch)
+{
+
+ PMCW *p = &sch->curr_status.pmcw;
+ SCSW *s = &sch->curr_status.scsw;
+ ORB *orb = sch->orb;
+ int path;
+ int ret;
+
+ /* Path management: In our simple css, we always choose the only path. */
+ path = 0x80;
+
+ if (!(s->actl & SCSW_ACTL_SUSP)) {
+ /* Look at the orb and try to execute the channel program. */
+ p->intparm = orb->intparm;
+ if (!(orb->lpm & path)) {
+ /* Generate a deferred cc 3 condition. */
+ s->cc = 3;
+ s->stctl = (SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND);
+ return;
+ }
+ } else {
+ s->actl &= ~(SCSW_ACTL_SUSP | SCSW_ACTL_RESUME_PEND);
+ }
+ sch->last_cmd = NULL;
+ do {
+ ret = css_interpret_ccw(sch, sch->channel_prog);
+ switch (ret) {
+ case -EAGAIN:
+ /* ccw chain, continue processing */
+ break;
+ case 0:
+ /* success */
+ s->actl &= ~SCSW_ACTL_START_PEND;
+ s->stctl = SCSW_STCTL_PRIMARY | SCSW_STCTL_SECONDARY |
+ SCSW_STCTL_STATUS_PEND;
+ s->dstat = SCSW_DSTAT_CHANNEL_END | SCSW_DSTAT_DEVICE_END;
+ break;
+ case -EOPNOTSUPP:
+ /* unsupported command, generate unit check (command reject) */
+ s->actl &= ~SCSW_ACTL_START_PEND;
+ s->dstat = SCSW_DSTAT_UNIT_CHECK;
+ /* Set sense bit 0 in ecw0. */
+ sch->sense_data[0] = 0x80;
+ s->stctl = SCSW_STCTL_PRIMARY | SCSW_STCTL_SECONDARY |
+ SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND;
+ break;
+ case -EFAULT:
+ /* memory problem, generate channel data check */
+ s->actl &= ~SCSW_ACTL_START_PEND;
+ s->cstat = SCSW_CSTAT_DATA_CHECK;
+ s->stctl = SCSW_STCTL_PRIMARY | SCSW_STCTL_SECONDARY |
+ SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND;
+ break;
+ case -EBUSY:
+ /* subchannel busy, generate deferred cc 1 */
+ s->cc = 1;
+ s->stctl = SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND;
+ break;
+ case -ERESTART:
+ /* channel program has been suspended */
+ s->actl &= ~SCSW_ACTL_START_PEND;
+ s->actl |= SCSW_ACTL_SUSP;
+ break;
+ default:
+ /* error, generate channel program check */
+ s->actl &= ~SCSW_ACTL_START_PEND;
+ s->cstat = SCSW_CSTAT_PROG_CHECK;
+ s->stctl = SCSW_STCTL_PRIMARY | SCSW_STCTL_SECONDARY |
+ SCSW_STCTL_ALERT | SCSW_STCTL_STATUS_PEND;
+ break;
+ }
+ } while (ret == -EAGAIN);
+
+}
+
+/*
+ * On real machines, this would run asynchronously to the main vcpus.
+ * We might want to make some parts of the ssch handling (interpreting
+ * read/writes) asynchronous later on if we start supporting more than
+ * our current very simple devices.
+ */
+static void do_subchannel_work(SubchDev *sch)
+{
+
+ SCSW *s = &sch->curr_status.scsw;
+
+ if (s->fctl & SCSW_FCTL_CLEAR_FUNC) {
+ sch_handle_clear_func(sch);
+ } else if (s->fctl & SCSW_FCTL_HALT_FUNC) {
+ sch_handle_halt_func(sch);
+ } else if (s->fctl & SCSW_FCTL_START_FUNC) {
+ sch_handle_start_func(sch);
+ } else {
+ /* Cannot happen. */
+ return;
+ }
+ css_inject_io_interrupt(sch);
+}
+
+int css_do_stsch(SubchDev *sch, uint64_t addr)
+{
+ int i;
+ uint32_t offset = 0;
+
+ /* Use current status. */
+ css_write_phys_pmcw(addr, &sch->curr_status.pmcw);
+ offset += sizeof(PMCW);
+ css_write_phys_scsw(addr + offset, &sch->curr_status.scsw);
+ offset += sizeof(SCSW);
+ stq_phys(addr + offset, sch->curr_status.mba);
+ offset += sizeof(sch->curr_status.mba);
+ for (i = 0; i < 4; i++) {
+ stb_phys(addr + offset, sch->curr_status.mda[i]);
+ offset += sizeof(sch->curr_status.mda[i]);
+ }
+ return 0;
+}
+
+int css_do_msch(SubchDev *sch, SCHIB *schib)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!sch->curr_status.pmcw.dnv) {
+ ret = 0;
+ goto out;
+ }
+
+ if (s->stctl & SCSW_STCTL_STATUS_PEND) {
+ ret = -EINPROGRESS;
+ goto out;
+ }
+
+ if (s->fctl &
+ (SCSW_FCTL_START_FUNC|SCSW_FCTL_HALT_FUNC|SCSW_FCTL_CLEAR_FUNC)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* Only update the program-modifiable fields. */
+ p->ena = schib->pmcw.ena;
+ p->intparm = schib->pmcw.intparm;
+ p->isc = schib->pmcw.isc;
+ p->mp = schib->pmcw.mp;
+ p->lpm = schib->pmcw.lpm;
+ p->pom = schib->pmcw.pom;
+ p->lm = schib->pmcw.lm;
+ p->csense = schib->pmcw.csense;
+
+ p->mme = schib->pmcw.mme;
+ p->mbi = schib->pmcw.mbi;
+ p->mbfc = schib->pmcw.mbfc;
+ sch->curr_status.mba = schib->mba;
+
+ ret = 0;
+
+out:
+ return ret;
+}
+
+int css_do_xsch(SubchDev *sch)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!p->dnv || !p->ena) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (!s->fctl || (s->fctl != SCSW_FCTL_START_FUNC) ||
+ (!(s->actl &
+ (SCSW_ACTL_RESUME_PEND | SCSW_ACTL_START_PEND | SCSW_ACTL_SUSP))) ||
+ (s->actl & SCSW_ACTL_SUBCH_ACTIVE)) {
+ ret = -EINPROGRESS;
+ goto out;
+ }
+
+ if (s->stctl != 0) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* Cancel the current operation. */
+ s->fctl &= ~SCSW_FCTL_START_FUNC;
+ s->actl &= ~(SCSW_ACTL_RESUME_PEND|SCSW_ACTL_START_PEND|SCSW_ACTL_SUSP);
+ sch->channel_prog = NULL;
+ sch->last_cmd = NULL;
+ sch->orb = NULL;
+ s->dstat = 0;
+ s->cstat = 0;
+ ret = 0;
+
+out:
+ return ret;
+}
+
+int css_do_csch(SubchDev *sch)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!p->dnv || !p->ena) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ /* Trigger the clear function. */
+ s->fctl = SCSW_FCTL_CLEAR_FUNC;
+ s->actl = SCSW_ACTL_CLEAR_PEND;
+
+ do_subchannel_work(sch);
+ ret = 0;
+
+out:
+ return ret;
+}
+
+int css_do_hsch(SubchDev *sch)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!p->dnv || !p->ena) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if ((s->stctl == SCSW_STCTL_STATUS_PEND) ||
+ (s->stctl & (SCSW_STCTL_PRIMARY |
+ SCSW_STCTL_SECONDARY |
+ SCSW_STCTL_ALERT))) {
+ ret = -EINPROGRESS;
+ goto out;
+ }
+
+ if (s->fctl & (SCSW_FCTL_HALT_FUNC | SCSW_FCTL_CLEAR_FUNC)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* Trigger the halt function. */
+ s->fctl |= SCSW_FCTL_HALT_FUNC;
+ s->fctl &= ~SCSW_FCTL_START_FUNC;
+ if ((s->actl == (SCSW_ACTL_SUBCH_ACTIVE | SCSW_ACTL_DEVICE_ACTIVE)) &&
+ (s->stctl == SCSW_STCTL_INTERMEDIATE)) {
+ s->stctl &= ~SCSW_STCTL_STATUS_PEND;
+ }
+ s->actl |= SCSW_ACTL_HALT_PEND;
+
+ do_subchannel_work(sch);
+ ret = 0;
+
+out:
+ return ret;
+}
+
+static void css_update_chnmon(SubchDev *sch)
+{
+ if (!sch->curr_status.pmcw.mme) {
+ /* Not active. */
+ return;
+ }
+ if (sch->curr_status.pmcw.mbfc) {
+ /* Format 1, per-subchannel area. */
+ struct cmbe *cmbe;
+
+ cmbe = qemu_get_ram_ptr(sch->curr_status.mba);
+ if (cmbe) {
+ cmbe->ssch_rsch_count++;
+ }
+ } else {
+ /* Format 0, global area. */
+ struct cmb *cmb;
+ uint32_t offset;
+
+ offset = sch->curr_status.pmcw.mbi << 5;
+ cmb = qemu_get_ram_ptr(channel_subsys->chnmon_area + offset);
+ if (cmb) {
+ cmb->ssch_rsch_count++;
+ }
+ }
+}
+
+int css_do_ssch(SubchDev *sch, ORB *orb)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!p->dnv || !p->ena) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (s->stctl & SCSW_STCTL_STATUS_PEND) {
+ ret = -EINPROGRESS;
+ goto out;
+ }
+
+ if (s->fctl & (SCSW_FCTL_START_FUNC |
+ SCSW_FCTL_HALT_FUNC |
+ SCSW_FCTL_CLEAR_FUNC)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* If monitoring is active, update counter. */
+ if (channel_subsys->chnmon_active) {
+ css_update_chnmon(sch);
+ }
+ sch->orb = orb;
+ sch->channel_prog = qemu_get_ram_ptr(orb->cpa);
+ /* Trigger the start function. */
+ s->fctl |= SCSW_FCTL_START_FUNC;
+ s->actl |= SCSW_ACTL_START_PEND;
+ s->pno = 0;
+
+ do_subchannel_work(sch);
+ ret = 0;
+
+out:
+ return ret;
+}
+
+int css_do_tsch(SubchDev *sch, uint64_t addr)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ uint8_t stctl;
+ uint8_t fctl;
+ uint8_t actl;
+ IRB irb;
+ int ret;
+ int i;
+ uint32_t offset = 0;
+
+ if (!p->dnv || !p->ena) {
+ ret = 3;
+ goto out;
+ }
+
+ stctl = s->stctl;
+ fctl = s->fctl;
+ actl = s->actl;
+
+ /* Prepare the irb for the guest. */
+ memset(&irb, 0, sizeof(IRB));
+
+ /* Copy scsw from current status. */
+ memcpy(&irb.scsw, s, sizeof(SCSW));
+ if (stctl & SCSW_STCTL_STATUS_PEND) {
+ if (s->cstat & (SCSW_CSTAT_DATA_CHECK |
+ SCSW_CSTAT_CHN_CTRL_CHK |
+ SCSW_CSTAT_INTF_CTRL_CHK)) {
+ irb.scsw.eswf = 1;
+ irb.esw[0] = 0x04804000;
+ } else {
+ irb.esw[0] = 0x00800000;
+ }
+ /* If a unit check is pending, copy sense data. */
+ if ((s->dstat & SCSW_DSTAT_UNIT_CHECK) && p->csense) {
+ irb.scsw.eswf = 1;
+ irb.scsw.ectl = 1;
+ memcpy(irb.ecw, sch->sense_data, sizeof(sch->sense_data));
+ irb.esw[1] = 0x02000000 | (sizeof(sch->sense_data) << 8);
+ }
+ }
+ /* Store the irb to the guest. */
+ css_write_phys_scsw(addr + offset, &irb.scsw);
+ offset += sizeof(SCSW);
+ for (i = 0; i < 5; i++) {
+ stl_phys(addr + offset, irb.esw[i]);
+ offset += sizeof(irb.esw[i]);
+ }
+ for (i = 0; i < 8; i++) {
+ stl_phys(addr + offset, irb.ecw[i]);
+ offset += sizeof(irb.ecw[i]);
+ }
+ for (i = 0; i < 8; i++) {
+ stl_phys(addr + offset, irb.emw[i]);
+ offset += sizeof(irb.emw[i]);
+ }
+
+ /* Clear conditions on subchannel, if applicable. */
+ if (stctl & SCSW_STCTL_STATUS_PEND) {
+ s->stctl = 0;
+ if ((stctl != (SCSW_STCTL_INTERMEDIATE | SCSW_STCTL_STATUS_PEND)) ||
+ ((fctl & SCSW_FCTL_HALT_FUNC) &&
+ (actl & SCSW_ACTL_SUSP))) {
+ s->fctl = 0;
+ }
+ if (stctl != (SCSW_STCTL_INTERMEDIATE | SCSW_STCTL_STATUS_PEND)) {
+ s->pno = 0;
+ s->actl &= ~(SCSW_ACTL_RESUME_PEND |
+ SCSW_ACTL_START_PEND |
+ SCSW_ACTL_HALT_PEND |
+ SCSW_ACTL_CLEAR_PEND |
+ SCSW_ACTL_SUSP);
+ } else {
+ if ((actl & SCSW_ACTL_SUSP) &&
+ (fctl & SCSW_FCTL_START_FUNC)) {
+ s->pno = 0;
+ if (fctl & SCSW_FCTL_HALT_FUNC) {
+ s->actl &= ~(SCSW_ACTL_RESUME_PEND |
+ SCSW_ACTL_START_PEND |
+ SCSW_ACTL_HALT_PEND |
+ SCSW_ACTL_CLEAR_PEND |
+ SCSW_ACTL_SUSP);
+ } else {
+ s->actl &= ~SCSW_ACTL_RESUME_PEND;
+ }
+ }
+ }
+ /* Clear pending sense data. */
+ if (p->csense) {
+ memset(sch->sense_data, 0 , sizeof(sch->sense_data));
+ }
+ }
+
+ ret = ((stctl & SCSW_STCTL_STATUS_PEND) == 0);
+
+out:
+ return ret;
+}
+
+int css_do_stcrw(uint64_t addr)
+{
+ CrwContainer *crw_cont;
+ int ret;
+
+ crw_cont = QTAILQ_FIRST(&channel_subsys->pending_crws);
+ if (crw_cont) {
+ QTAILQ_REMOVE(&channel_subsys->pending_crws, crw_cont, sibling);
+ stl_phys(addr, *(uint32_t *)&crw_cont->crw);
+ g_free(crw_cont);
+ ret = 0;
+ } else {
+ /* List was empty, turn crw machine checks on again. */
+ stl_phys(addr, 0);
+ channel_subsys->do_crw_mchk = true;
+ ret = 1;
+ }
+
+ return ret;
+}
+
+int css_do_tpi(uint64_t addr, int lowcore)
+{
+ /* No pending interrupts for !KVM. */
+ return 0;
+}
+
+int css_collect_chp_desc(int m, uint8_t cssid, uint8_t f_chpid, uint8_t l_chpid,
+ int rfmt, void *buf)
+{
+ int i, desc_size;
+ uint32_t words[8];
+ CssImage *css;
+
+ if (!m && !cssid) {
+ css = channel_subsys->css[channel_subsys->default_cssid];
+ } else {
+ css = channel_subsys->css[cssid];
+ }
+ if (!css) {
+ return 0;
+ }
+ desc_size = 0;
+ for (i = f_chpid; i <= l_chpid; i++) {
+ if (css->chpids[i].in_use) {
+ if (rfmt == 0) {
+ words[0] = 0x80000000 | (css->chpids[i].type << 8) | i;
+ words[1] = 0;
+ memcpy(buf + desc_size, words, 8);
+ desc_size += 8;
+ } else if (rfmt == 1) {
+ words[0] = 0x80000000 | (css->chpids[i].type << 8) | i;
+ words[1] = 0;
+ words[2] = 0;
+ words[3] = 0;
+ words[4] = 0;
+ words[5] = 0;
+ words[6] = 0;
+ words[7] = 0;
+ memcpy(buf + desc_size, words, 32);
+ desc_size += 32;
+ }
+ }
+ }
+ return desc_size;
+}
+
+void css_do_schm(uint8_t mbk, int update, int dct, uint64_t mbo)
+{
+ /* dct is currently ignored (not really meaningful for our devices) */
+ /* TODO: Don't ignore mbk. */
+ if (update && !channel_subsys->chnmon_active) {
+ /* Enable measuring. */
+ channel_subsys->chnmon_area = mbo;
+ channel_subsys->chnmon_active = true;
+ }
+ if (!update && channel_subsys->chnmon_active) {
+ /* Disable measuring. */
+ channel_subsys->chnmon_area = 0;
+ channel_subsys->chnmon_active = false;
+ }
+}
+
+int css_do_rsch(SubchDev *sch)
+{
+ SCSW *s = &sch->curr_status.scsw;
+ PMCW *p = &sch->curr_status.pmcw;
+ int ret;
+
+ if (!p->dnv || !p->ena) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (s->stctl & SCSW_STCTL_STATUS_PEND) {
+ ret = -EINPROGRESS;
+ goto out;
+ }
+
+ if ((s->fctl != SCSW_FCTL_START_FUNC) ||
+ (s->actl & SCSW_ACTL_RESUME_PEND) ||
+ (!(s->actl & SCSW_ACTL_SUSP))) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* If monitoring is active, update counter. */
+ if (channel_subsys->chnmon_active) {
+ css_update_chnmon(sch);
+ }
+
+ s->actl |= SCSW_ACTL_RESUME_PEND;
+ do_subchannel_work(sch);
+ ret = 0;
+
+out:
+ return ret;
+}
+
+int css_do_rchp(uint8_t cssid, uint8_t chpid)
+{
+ uint8_t real_cssid;
+
+ if (cssid > channel_subsys->max_cssid) {
+ return -EINVAL;
+ }
+ if (channel_subsys->max_cssid == 0) {
+ real_cssid = channel_subsys->default_cssid;
+ } else {
+ real_cssid = cssid;
+ }
+ if (!channel_subsys->css[real_cssid]) {
+ return -EINVAL;
+ }
+
+ if (!channel_subsys->css[real_cssid]->chpids[chpid].in_use) {
+ return -ENODEV;
+ }
+
+ if (!channel_subsys->css[real_cssid]->chpids[chpid].is_virtual) {
+ fprintf(stderr,
+ "rchp unsupported for non-virtual chpid %x.%02x!\n",
+ real_cssid, chpid);
+ return -ENODEV;
+ }
+
+ /* We don't really use a channel path, so we're done here. */
+ css_queue_crw(CRW_RSC_CHP, CRW_ERC_INIT,
+ channel_subsys->max_cssid > 0 ? 1 : 0, chpid);
+ if (channel_subsys->max_cssid > 0) {
+ css_queue_crw(CRW_RSC_CHP, CRW_ERC_INIT, 0, real_cssid << 8);
+ }
+ return 0;
+}
+
+bool css_schid_final(uint8_t cssid, uint8_t ssid, uint16_t schid)
+{
+ SubchSet *set;
+
+ if (cssid > MAX_CSSID || ssid > MAX_SSID || !channel_subsys->css[cssid] ||
+ !channel_subsys->css[cssid]->sch_set[ssid]) {
+ return true;
+ }
+ set = channel_subsys->css[cssid]->sch_set[ssid];
+ return schid > find_last_bit(set->schids_used,
+ (MAX_SCHID + 1) / sizeof(unsigned long));
+}
+
+static int css_add_virtual_chpid(uint8_t cssid, uint8_t chpid, uint8_t type)
+{
+ CssImage *css;
+
+ if (cssid > MAX_CSSID) {
+ return -EINVAL;
+ }
+ css = channel_subsys->css[cssid];
+ if (!css) {
+ return -EINVAL;
+ }
+ if (css->chpids[chpid].in_use) {
+ return -EEXIST;
+ }
+ css->chpids[chpid].in_use = 1;
+ css->chpids[chpid].type = type;
+ css->chpids[chpid].is_virtual = 1;
+
+ css_generate_chp_crws(cssid, chpid);
+
+ return 0;
+}
+
+void css_sch_build_virtual_schib(SubchDev *sch, uint8_t chpid, uint8_t type)
+{
+ PMCW *p = &sch->curr_status.pmcw;
+ SCSW *s = &sch->curr_status.scsw;
+ int i;
+ CssImage *css = channel_subsys->css[sch->cssid];
+
+ assert(css != NULL);
+ memset(p, 0, sizeof(PMCW));
+ p->dnv = 1;
+ p->dev = sch->devno;
+ /* single path */
+ p->pim = 0x80;
+ p->pom = 0xff;
+ p->pam = 0x80;
+ p->chpid[0] = chpid;
+ if (!css->chpids[chpid].in_use) {
+ css_add_virtual_chpid(sch->cssid, chpid, type);
+ }
+
+ memset(s, 0, sizeof(SCSW));
+ sch->curr_status.mba = 0;
+ for (i = 0; i < 4; i++) {
+ sch->curr_status.mda[i] = 0;
+ }
+}
+
+SubchDev *css_find_subch(uint8_t m, uint8_t cssid, uint8_t ssid, uint16_t schid)
+{
+ uint8_t real_cssid;
+
+ real_cssid = (!m && (cssid == 0)) ? channel_subsys->default_cssid : cssid;
+
+ if (!channel_subsys->css[real_cssid]) {
+ return NULL;
+ }
+
+ if (!channel_subsys->css[real_cssid]->sch_set[ssid]) {
+ return NULL;
+ }
+
+ return channel_subsys->css[real_cssid]->sch_set[ssid]->sch[schid];
+}
+
+bool css_subch_visible(SubchDev *sch)
+{
+ if (sch->ssid > channel_subsys->max_ssid) {
+ return false;
+ }
+
+ if (sch->cssid != channel_subsys->default_cssid) {
+ return (channel_subsys->max_cssid > 0);
+ }
+
+ return true;
+}
+
+bool css_present(uint8_t cssid)
+{
+ return (channel_subsys->css[cssid] != NULL);
+}
+
+bool css_devno_used(uint8_t cssid, uint8_t ssid, uint16_t devno)
+{
+ if (!channel_subsys->css[cssid]) {
+ return false;
+ }
+ if (!channel_subsys->css[cssid]->sch_set[ssid]) {
+ return false;
+ }
+
+ return !!test_bit(devno,
+ channel_subsys->css[cssid]->sch_set[ssid]->devnos_used);
+}
+
+void css_subch_assign(uint8_t cssid, uint8_t ssid, uint16_t schid,
+ uint16_t devno, SubchDev *sch)
+{
+ CssImage *css;
+ SubchSet *s_set;
+
+ if (!channel_subsys->css[cssid]) {
+ fprintf(stderr,
+ "Suspicious call to %s (%x.%x.%04x) for non-existing css!\n",
+ __func__, cssid, ssid, schid);
+ return;
+ }
+ css = channel_subsys->css[cssid];
+
+ if (!css->sch_set[ssid]) {
+ css->sch_set[ssid] = g_malloc0(sizeof(SubchSet));
+ }
+ s_set = css->sch_set[ssid];
+
+ s_set->sch[schid] = sch;
+ if (sch) {
+ set_bit(schid, s_set->schids_used);
+ set_bit(devno, s_set->devnos_used);
+ } else {
+ clear_bit(schid, s_set->schids_used);
+ clear_bit(devno, s_set->devnos_used);
+ }
+}
+
+void css_queue_crw(uint8_t rsc, uint8_t erc, int chain, uint16_t rsid)
+{
+ CrwContainer *crw_cont;
+
+ /* TODO: Maybe use a static crw pool? */
+ crw_cont = g_try_malloc0(sizeof(CrwContainer));
+ if (!crw_cont) {
+ channel_subsys->crws_lost = true;
+ return;
+ }
+ crw_cont->crw.rsc = rsc;
+ crw_cont->crw.erc = erc;
+ crw_cont->crw.c = chain;
+ crw_cont->crw.rsid = rsid;
+ crw_cont->crw.r = channel_subsys->crws_lost ? 1 : 0;
+ channel_subsys->crws_lost = false;
+
+ QTAILQ_INSERT_TAIL(&channel_subsys->pending_crws, crw_cont, sibling);
+
+ if (channel_subsys->do_crw_mchk) {
+ S390CPU *cpu = s390_cpu_addr2state(0);
+
+ channel_subsys->do_crw_mchk = false;
+ /* Inject crw pending machine check. */
+ s390_crw_mchk(&cpu->env);
+ }
+}
+
+void css_generate_sch_crws(uint8_t cssid, uint8_t ssid, uint16_t schid,
+ int hotplugged, int add)
+{
+ uint8_t guest_cssid;
+ bool chain_crw;
+
+ if (add && !hotplugged) {
+ return;
+ }
+ if (channel_subsys->max_cssid == 0) {
+ /* Default cssid shows up as 0. */
+ guest_cssid = (cssid == channel_subsys->default_cssid) ? 0 : cssid;
+ } else {
+ /* Show real cssid to the guest. */
+ guest_cssid = cssid;
+ }
+ /*
+ * Only notify for higher subchannel sets/channel subsystems if the
+ * guest has enabled it.
+ */
+ if ((ssid > channel_subsys->max_ssid) ||
+ (guest_cssid > channel_subsys->max_cssid) ||
+ ((channel_subsys->max_cssid == 0) &&
+ (cssid != channel_subsys->default_cssid))) {
+ return;
+ }
+ chain_crw = (channel_subsys->max_ssid > 0) ||
+ (channel_subsys->max_cssid > 0);
+ css_queue_crw(CRW_RSC_SUBCH, CRW_ERC_IPI, chain_crw ? 1 : 0, schid);
+ if (chain_crw) {
+ css_queue_crw(CRW_RSC_SUBCH, CRW_ERC_IPI, 0,
+ (guest_cssid << 8) | (ssid << 4));
+ }
+}
+
+void css_generate_chp_crws(uint8_t cssid, uint8_t chpid)
+{
+ /* TODO */
+}
+
+int css_enable_mcsse(void)
+{
+ channel_subsys->max_cssid = MAX_CSSID;
+ return 0;
+}
+
+int css_enable_mss(void)
+{
+ channel_subsys->max_ssid = MAX_SSID;
+ return 0;
+}
+
+static void css_init(void)
+{
+ channel_subsys = g_malloc0(sizeof(*channel_subsys));
+ QTAILQ_INIT(&channel_subsys->pending_crws);
+ channel_subsys->do_crw_mchk = true;
+ channel_subsys->crws_lost = false;
+ channel_subsys->chnmon_active = false;
+}
+machine_init(css_init);
+
+void css_reset_sch(SubchDev *sch)
+{
+ PMCW *p = &sch->curr_status.pmcw;
+
+ p->intparm = 0;
+ p->isc = 0;
+ p->ena = 0;
+ p->lm = 0;
+ p->mme = 0;
+ p->mp = 0;
+ p->tf = 0;
+ p->dnv = 1;
+ p->dev = sch->devno;
+ p->pim = 0x80;
+ p->lpm = p->pim;
+ p->pnom = 0;
+ p->lpum = 0;
+ p->mbi = 0;
+ p->pom = 0xff;
+ p->pam = 0x80;
+ p->mbfc = 0;
+ p->xmwme = 0;
+ p->csense = 0;
+
+ memset(&sch->curr_status.scsw, 0, sizeof(sch->curr_status.scsw));
+ sch->curr_status.mba = 0;
+
+ sch->channel_prog = NULL;
+ sch->last_cmd = NULL;
+ sch->orb = NULL;
+}
+
+void css_reset(void)
+{
+ CrwContainer *crw_cont;
+
+ /* Clean up monitoring. */
+ channel_subsys->chnmon_active = false;
+ channel_subsys->chnmon_area = 0;
+
+ /* Clear pending CRWs. */
+ while ((crw_cont = QTAILQ_FIRST(&channel_subsys->pending_crws))) {
+ QTAILQ_REMOVE(&channel_subsys->pending_crws, crw_cont, sibling);
+ g_free(crw_cont);
+ }
+ channel_subsys->do_crw_mchk = true;
+ channel_subsys->crws_lost = false;
+
+ /* Reset maximum ids. */
+ channel_subsys->max_cssid = 0;
+ channel_subsys->max_ssid = 0;
+}
diff --git a/hw/s390x/css.h b/hw/s390x/css.h
new file mode 100644
index 0000000..638b801
--- /dev/null
+++ b/hw/s390x/css.h
@@ -0,0 +1,90 @@
+/*
+ * Channel subsystem structures and definitions.
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef CSS_H
+#define CSS_H
+
+#include "ioinst.h"
+
+/* Channel subsystem constants. */
+#define MAX_SCHID 65535
+#define MAX_SSID 3
+#define MAX_CSSID 254 /* 255 is reserved */
+#define MAX_CHPID 255
+
+#define MAX_CIWS 62
+
+typedef struct SenseId {
+ /* common part */
+ uint8_t reserved; /* always 0x'FF' */
+ uint16_t cu_type; /* control unit type */
+ uint8_t cu_model; /* control unit model */
+ uint16_t dev_type; /* device type */
+ uint8_t dev_model; /* device model */
+ uint8_t unused; /* padding byte */
+ /* extended part */
+ uint32_t ciw[MAX_CIWS]; /* variable # of CIWs */
+} QEMU_PACKED SenseId;
+
+/* Channel measurements, from linux/drivers/s390/cio/cmf.c. */
+struct cmb {
+ uint16_t ssch_rsch_count;
+ uint16_t sample_count;
+ uint32_t device_connect_time;
+ uint32_t function_pending_time;
+ uint32_t device_disconnect_time;
+ uint32_t control_unit_queuing_time;
+ uint32_t device_active_only_time;
+ uint32_t reserved[2];
+};
+
+struct cmbe {
+ uint32_t ssch_rsch_count;
+ uint32_t sample_count;
+ uint32_t device_connect_time;
+ uint32_t function_pending_time;
+ uint32_t device_disconnect_time;
+ uint32_t control_unit_queuing_time;
+ uint32_t device_active_only_time;
+ uint32_t device_busy_time;
+ uint32_t initial_command_response_time;
+ uint32_t reserved[7];
+};
+
+struct SubchDev {
+ /* channel-subsystem related things: */
+ uint8_t cssid;
+ uint8_t ssid;
+ uint16_t schid;
+ uint16_t devno;
+ SCHIB curr_status;
+ uint8_t sense_data[32];
+ CCW1 *channel_prog;
+ CCW1 *last_cmd;
+ ORB *orb;
+ /* transport-provided data: */
+ int (*ccw_cb) (SubchDev *, CCW1 *);
+ SenseId id;
+ void *driver_data;
+};
+
+typedef SubchDev *(*css_subch_cb_func)(uint8_t m, uint8_t cssid, uint8_t ssid,
+ uint16_t schid);
+int css_create_css_image(uint8_t cssid, bool default_image);
+bool css_devno_used(uint8_t cssid, uint8_t ssid, uint16_t devno);
+void css_subch_assign(uint8_t cssid, uint8_t ssid, uint16_t schid,
+ uint16_t devno, SubchDev *sch);
+void css_sch_build_virtual_schib(SubchDev *sch, uint8_t chpid, uint8_t type);
+void css_reset(void);
+void css_reset_sch(SubchDev *sch);
+void css_queue_crw(uint8_t rsc, uint8_t erc, int chain, uint16_t rsid);
+
+#endif
diff --git a/target-s390x/Makefile.objs b/target-s390x/Makefile.objs
index e728abf..3afb0b7 100644
--- a/target-s390x/Makefile.objs
+++ b/target-s390x/Makefile.objs
@@ -1,4 +1,4 @@
obj-y += translate.o helper.o cpu.o interrupt.o
obj-y += int_helper.o fpu_helper.o cc_helper.o mem_helper.o misc_helper.o
-obj-$(CONFIG_SOFTMMU) += machine.o
+obj-$(CONFIG_SOFTMMU) += machine.o ioinst.o
obj-$(CONFIG_KVM) += kvm.o
diff --git a/target-s390x/cpu.h b/target-s390x/cpu.h
index 5be6e83..ecf44cd 100644
--- a/target-s390x/cpu.h
+++ b/target-s390x/cpu.h
@@ -47,6 +47,11 @@
#define MMU_USER_IDX 1
#define MAX_EXT_QUEUE 16
+#define MAX_IO_QUEUE 16
+#define MAX_MCHK_QUEUE 16
+
+#define PSW_MCHK_MASK 0x0004000000000000
+#define PSW_IO_MASK 0x0200000000000000
typedef struct PSW {
uint64_t mask;
@@ -59,6 +64,17 @@ typedef struct ExtQueue {
uint32_t param64;
} ExtQueue;
+typedef struct IOQueue {
+ uint16_t id;
+ uint16_t nr;
+ uint32_t parm;
+ uint32_t word;
+} IOQueue;
+
+typedef struct MchkQueue {
+ uint16_t type;
+} MchkQueue;
+
typedef struct CPUS390XState {
uint64_t regs[16]; /* GP registers */
@@ -88,8 +104,16 @@ typedef struct CPUS390XState {
int pending_int;
ExtQueue ext_queue[MAX_EXT_QUEUE];
+ IOQueue io_queue[MAX_IO_QUEUE][8];
+ MchkQueue mchk_queue[MAX_MCHK_QUEUE];
int ext_index;
+ int io_index[8];
+ int mchk_index;
+
+ uint64_t ckc;
+ uint64_t cputm;
+ uint32_t todpr;
CPU_COMMON
@@ -103,6 +127,8 @@ typedef struct CPUS390XState {
QEMUTimer *tod_timer;
QEMUTimer *cpu_timer;
+
+ void *chsc_page;
} CPUS390XState;
#include "cpu-qom.h"
@@ -339,6 +365,112 @@ static inline unsigned s390_del_running_cpu(CPUS390XState *env)
void cpu_lock(void);
void cpu_unlock(void);
+typedef struct SubchDev SubchDev;
+typedef struct SCHIB SCHIB;
+typedef struct ORB ORB;
+
+#ifndef CONFIG_USER_ONLY
+SubchDev *css_find_subch(uint8_t m, uint8_t cssid, uint8_t ssid,
+ uint16_t schid);
+bool css_subch_visible(SubchDev *sch);
+void css_conditional_io_interrupt(SubchDev *sch);
+int css_do_stsch(SubchDev *sch, uint64_t addr);
+bool css_schid_final(uint8_t cssid, uint8_t ssid, uint16_t schid);
+int css_do_msch(SubchDev *sch, SCHIB *schib);
+int css_do_xsch(SubchDev *sch);
+int css_do_csch(SubchDev *sch);
+int css_do_hsch(SubchDev *sch);
+int css_do_ssch(SubchDev *sch, ORB *orb);
+int css_do_tsch(SubchDev *sch, uint64_t addr);
+int css_do_stcrw(uint64_t addr);
+int css_do_tpi(uint64_t addr, int lowcore);
+int css_collect_chp_desc(int m, uint8_t cssid, uint8_t f_chpid, uint8_t l_chpid,
+ int rfmt, void *buf);
+void css_do_schm(uint8_t mbk, int update, int dct, uint64_t mbo);
+int css_enable_mcsse(void);
+int css_enable_mss(void);
+int css_do_rsch(SubchDev *sch);
+int css_do_rchp(uint8_t cssid, uint8_t chpid);
+bool css_present(uint8_t cssid);
+#else
+static inline SubchDev *css_find_subch(uint8_t m, uint8_t cssid, uint8_t ssid,
+ uint16_t schid)
+{
+ return NULL;
+}
+static inline bool css_subch_visible(SubchDev *sch)
+{
+ return false;
+}
+static inline void css_conditional_io_interrupt(SubchDev *sch)
+{
+}
+static inline int css_do_stsch(SubchDev *sch, uint64_t addr)
+{
+ return -ENODEV;
+}
+static inline bool css_schid_final(uint8_t cssid, uint8_t ssid, uint16_t schid)
+{
+ return true;
+}
+static inline int css_do_msch(SubchDev *sch, SCHIB *schib)
+{
+ return -ENODEV;
+}
+static inline int css_do_xsch(SubchDev *sch)
+{
+ return -ENODEV;
+}
+static inline int css_do_csch(SubchDev *sch)
+{
+ return -ENODEV;
+}
+static inline int css_do_hsch(SubchDev *sch)
+{
+ return -ENODEV;
+}
+static inline int css_do_ssch(SubchDev *sch, ORB *orb)
+{
+ return -ENODEV;
+}
+static inline int css_do_tsch(SubchDev *sch, uint64_t addr)
+{
+ return -ENODEV;
+}
+static inline int css_do_stcrw(uint64_t addr)
+{
+ return 1;
+}
+static inline int css_do_tpi(uint64_t addr, int lowcore)
+{
+ return 0;
+}
+static inline int css_collect_chp_desc(int m, uint8_t cssid, uint8_t f_chpid,
+ uint8_t l_chpid, int rfmt, void *buf)
+{
+ return 0;
+}
+static inline void css_do_schm(uint8_t mbk, int update, int dct, uint64_t mbo)
+{
+}
+static inline int css_enable_mss(void)
+{
+ return -EINVAL;
+}
+static inline int css_do_rsch(SubchDev *sch)
+{
+ return -ENODEV;
+}
+static inline int css_do_rchp(uint8_t cssid, uint8_t chpid)
+{
+ return -ENODEV;
+}
+static inline bool css_present(uint8_t cssid)
+{
+ return false;
+}
+#endif
+
static inline void cpu_set_tls(CPUS390XState *env, target_ulong newtls)
{
env->aregs[0] = newtls >> 32;
@@ -364,12 +496,16 @@ static inline void cpu_set_tls(CPUS390XState *env, target_ulong newtls)
#define EXCP_EXT 1 /* external interrupt */
#define EXCP_SVC 2 /* supervisor call (syscall) */
#define EXCP_PGM 3 /* program interruption */
+#define EXCP_IO 7 /* I/O interrupt */
+#define EXCP_MCHK 8 /* machine check */
#endif /* CONFIG_USER_ONLY */
#define INTERRUPT_EXT (1 << 0)
#define INTERRUPT_TOD (1 << 1)
#define INTERRUPT_CPUTIMER (1 << 2)
+#define INTERRUPT_IO (1 << 3)
+#define INTERRUPT_MCHK (1 << 4)
/* Program Status Word. */
#define S390_PSWM_REGNUM 0
@@ -977,6 +1113,45 @@ static inline void cpu_inject_ext(CPUS390XState *env, uint32_t code, uint32_t pa
cpu_interrupt(env, CPU_INTERRUPT_HARD);
}
+static inline void cpu_inject_io(CPUS390XState *env, uint16_t subchannel_id,
+ uint16_t subchannel_number,
+ uint32_t io_int_parm, uint32_t io_int_word)
+{
+ int isc = ffs(io_int_word << 2) - 1;
+
+ if (env->io_index[isc] == MAX_IO_QUEUE - 1) {
+ /* ugh - can't queue anymore. Let's drop. */
+ return;
+ }
+
+ env->io_index[isc]++;
+ assert(env->io_index[isc] < MAX_IO_QUEUE);
+
+ env->io_queue[env->io_index[isc]][isc].id = subchannel_id;
+ env->io_queue[env->io_index[isc]][isc].nr = subchannel_number;
+ env->io_queue[env->io_index[isc]][isc].parm = io_int_parm;
+ env->io_queue[env->io_index[isc]][isc].word = io_int_word;
+
+ env->pending_int |= INTERRUPT_IO;
+ cpu_interrupt(env, CPU_INTERRUPT_HARD);
+}
+
+static inline void cpu_inject_crw_mchk(CPUS390XState *env)
+{
+ if (env->mchk_index == MAX_MCHK_QUEUE - 1) {
+ /* ugh - can't queue anymore. Let's drop. */
+ return;
+ }
+
+ env->mchk_index++;
+ assert(env->mchk_index < MAX_MCHK_QUEUE);
+
+ env->mchk_queue[env->mchk_index].type = 1;
+
+ env->pending_int |= INTERRUPT_MCHK;
+ cpu_interrupt(env, CPU_INTERRUPT_HARD);
+}
+
static inline bool cpu_has_work(CPUS390XState *env)
{
return (env->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -996,5 +1171,62 @@ uint32_t set_cc_nz_f64(float64 v);
/* misc_helper.c */
void program_interrupt(CPUS390XState *env, uint32_t code, int ilc);
+int css_handle_sch_io(uint32_t sch_id, uint8_t func, uint64_t orb, void *scsw,
+ void *pmcw);
+void css_generate_sch_crws(uint8_t cssid, uint8_t ssid, uint16_t schid,
+ int hotplugged, int add);
+void css_generate_chp_crws(uint8_t cssid, uint8_t chpid);
+void css_inject_io(uint8_t cssid, uint8_t ssid, uint16_t schid, uint8_t isc,
+ uint32_t intparm, int unsolicited);
+#ifdef CONFIG_KVM
+int kvm_s390_io_interrupt(CPUS390XState *env, uint16_t subchannel_id,
+ uint16_t subchannel_nr, uint32_t io_int_parm,
+ uint32_t io_int_word);
+int kvm_s390_crw_mchk(CPUS390XState *env);
+void kvm_s390_enable_css_support(CPUS390XState *env);
+#else
+static inline int kvm_s390_io_interrupt(CPUS390XState *env,
+ uint16_t subchannel_id,
+ uint16_t subchannel_nr,
+ uint32_t io_int_parm,
+ uint32_t io_int_word)
+{
+ return -EOPNOTSUPP;
+}
+static inline int kvm_s390_crw_mchk(CPUS390XState *env)
+{
+ return -EOPNOTSUPP;
+}
+static inline void kvm_s390_enable_css_support(CPUS390XState *env)
+{
+}
+#endif
+
+static inline void s390_io_interrupt(CPUS390XState *env,
+ uint16_t subchannel_id,
+ uint16_t subchannel_nr,
+ uint32_t io_int_parm,
+ uint32_t io_int_word)
+{
+ int ret;
+
+ ret = kvm_s390_io_interrupt(env, subchannel_id, subchannel_nr, io_int_parm,
+ io_int_word);
+ if (ret == -EOPNOTSUPP) {
+ cpu_inject_io(env, subchannel_id, subchannel_nr, io_int_parm,
+ io_int_word);
+ }
+}
+
+static inline void s390_crw_mchk(CPUS390XState *env)
+{
+ int ret;
+
+ ret = kvm_s390_crw_mchk(env);
+
+ if (ret == -EOPNOTSUPP) {
+ cpu_inject_crw_mchk(env);
+ }
+}
#endif
diff --git a/target-s390x/helper.c b/target-s390x/helper.c
index b7b812a..8e3930a 100644
--- a/target-s390x/helper.c
+++ b/target-s390x/helper.c
@@ -574,12 +574,145 @@ static void do_ext_interrupt(CPUS390XState *env)
load_psw(env, mask, addr);
}
+static void do_io_interrupt(CPUS390XState *env)
+{
+ uint64_t mask, addr;
+ LowCore *lowcore;
+ hwaddr len = TARGET_PAGE_SIZE;
+ IOQueue *q;
+ uint8_t isc;
+ int disable = 1;
+ int found = 0;
+
+ if (!(env->psw.mask & PSW_MASK_IO)) {
+ cpu_abort(env, "I/O int w/o I/O mask\n");
+ }
+
+
+ for (isc = 0; isc < 8; isc++) {
+ if (env->io_index[isc] < 0) {
+ continue;
+ }
+ if (env->io_index[isc] > MAX_IO_QUEUE) {
+ cpu_abort(env, "I/O queue overrun for isc %d: %d\n",
+ isc, env->io_index[isc]);
+ }
+
+ q = &env->io_queue[env->io_index[isc]][isc];
+ if (!(env->cregs[6] & q->word)) {
+ disable = 0;
+ continue;
+ }
+ found = 1;
+ lowcore = cpu_physical_memory_map(env->psa, &len, 1);
+
+ lowcore->subchannel_id = cpu_to_be16(q->id);
+ lowcore->subchannel_nr = cpu_to_be16(q->nr);
+ lowcore->io_int_parm = cpu_to_be32(q->parm);
+ lowcore->io_int_word = cpu_to_be32(q->word);
+ lowcore->io_old_psw.mask = cpu_to_be64(get_psw_mask(env));
+ lowcore->io_old_psw.addr = cpu_to_be64(env->psw.addr);
+ mask = be64_to_cpu(lowcore->io_new_psw.mask);
+ addr = be64_to_cpu(lowcore->io_new_psw.addr);
+
+ cpu_physical_memory_unmap(lowcore, len, 1, len);
+
+ env->io_index[isc]--;
+ if (env->io_index[isc] >= 0) {
+ disable = 0;
+ }
+ break;
+ }
+
+ if (disable) {
+ env->pending_int &= ~INTERRUPT_IO;
+ }
+ if (found) {
+ DPRINTF("%s: %" PRIx64 " %" PRIx64 "\n", __func__,
+ env->psw.mask, env->psw.addr);
+
+ load_psw(env, mask, addr);
+ }
+}
+
+static void do_mchk_interrupt(CPUS390XState *env)
+{
+ uint64_t mask, addr;
+ LowCore *lowcore;
+ hwaddr len = TARGET_PAGE_SIZE;
+ MchkQueue *q;
+ int i;
+
+ if (!(env->psw.mask & PSW_MASK_MCHECK)) {
+ cpu_abort(env, "Machine check w/o mchk mask\n");
+ }
+
+ if (env->mchk_index < 0 || env->mchk_index > MAX_MCHK_QUEUE) {
+ cpu_abort(env, "Mchk queue overrun: %d\n", env->mchk_index);
+ }
+
+ q = &env->mchk_queue[env->mchk_index];
+
+ if (q->type != 1) {
+ /* Don't know how to handle this... */
+ cpu_abort(env, "Unknown machine check type %d\n", q->type);
+ }
+ if (!(env->cregs[14] & (1 << 28))) {
+ /* CRW machine checks disabled */
+ return;
+ }
+
+ lowcore = cpu_physical_memory_map(env->psa, &len, 1);
+
+ for (i = 0; i < 16; i++) {
+ lowcore->floating_pt_save_area[i] = cpu_to_be64(env->fregs[i].ll);
+ lowcore->gpregs_save_area[i] = cpu_to_be64(env->regs[i]);
+ lowcore->access_regs_save_area[i] = cpu_to_be32(env->aregs[i]);
+ lowcore->cregs_save_area[i] = cpu_to_be64(env->cregs[i]);
+ }
+ lowcore->prefixreg_save_area = cpu_to_be32(env->psa);
+ lowcore->fpt_creg_save_area = cpu_to_be32(env->fpc);
+ lowcore->tod_progreg_save_area = cpu_to_be32(env->todpr);
+ lowcore->cpu_timer_save_area[0] = cpu_to_be32(env->cputm >> 32);
+ lowcore->cpu_timer_save_area[1] =
+ cpu_to_be32(env->cputm & 0x00000000ffffffff);
+ lowcore->clock_comp_save_area[0] = cpu_to_be32(env->ckc >> 32);
+ lowcore->clock_comp_save_area[1] =
+ cpu_to_be32(env->ckc & 0x00000000ffffffff);
+
+ lowcore->mcck_interruption_code[0] = cpu_to_be32(0x00400f1d);
+ lowcore->mcck_interruption_code[1] = cpu_to_be32(0x40330000);
+ lowcore->mcck_old_psw.mask = cpu_to_be64(get_psw_mask(env));
+ lowcore->mcck_old_psw.addr = cpu_to_be64(env->psw.addr);
+ mask = be64_to_cpu(lowcore->mcck_new_psw.mask);
+ addr = be64_to_cpu(lowcore->mcck_new_psw.addr);
+
+ cpu_physical_memory_unmap(lowcore, len, 1, len);
+
+ env->mchk_index--;
+ if (env->mchk_index == -1) {
+ env->pending_int &= ~INTERRUPT_MCHK;
+ }
+
+ DPRINTF("%s: %" PRIx64 " %" PRIx64 "\n", __func__,
+ env->psw.mask, env->psw.addr);
+
+ load_psw(env, mask, addr);
+}
+
void do_interrupt(CPUS390XState *env)
{
qemu_log_mask(CPU_LOG_INT, "%s: %d at pc=%" PRIx64 "\n",
__func__, env->exception_index, env->psw.addr);
s390_add_running_cpu(env);
+ /* handle machine checks */
+ if ((env->psw.mask & PSW_MASK_MCHECK) &&
+ (env->exception_index == -1)) {
+ if (env->pending_int & INTERRUPT_MCHK) {
+ env->exception_index = EXCP_MCHK;
+ }
+ }
/* handle external interrupts */
if ((env->psw.mask & PSW_MASK_EXT) &&
env->exception_index == -1) {
@@ -598,6 +731,13 @@ void do_interrupt(CPUS390XState *env)
env->pending_int &= ~INTERRUPT_TOD;
}
}
+ /* handle I/O interrupts */
+ if ((env->psw.mask & PSW_MASK_IO) &&
+ (env->exception_index == -1)) {
+ if (env->pending_int & INTERRUPT_IO) {
+ env->exception_index = EXCP_IO;
+ }
+ }
switch (env->exception_index) {
case EXCP_PGM:
@@ -609,6 +749,12 @@ void do_interrupt(CPUS390XState *env)
case EXCP_EXT:
do_ext_interrupt(env);
break;
+ case EXCP_IO:
+ do_io_interrupt(env);
+ break;
+ case EXCP_MCHK:
+ do_mchk_interrupt(env);
+ break;
}
env->exception_index = -1;
diff --git a/target-s390x/ioinst.c b/target-s390x/ioinst.c
new file mode 100644
index 0000000..6356681
--- /dev/null
+++ b/target-s390x/ioinst.c
@@ -0,0 +1,737 @@
+/*
+ * I/O instructions for S/390
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+#include "cpu.h"
+#include "ioinst.h"
+
+#ifdef DEBUG_IOINST
+#define dprintf(fmt, ...) \
+ do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define dprintf(fmt, ...) \
+ do { } while (0)
+#endif
+
+/* Special handling for the prefix page. */
+static void *s390_get_address(CPUS390XState *env, ram_addr_t guest_addr)
+{
+ if (guest_addr < 8192) {
+ guest_addr += env->psa;
+ } else if ((env->psa <= guest_addr) && (guest_addr < env->psa + 8192)) {
+ guest_addr -= env->psa;
+ }
+
+ return qemu_get_ram_ptr(guest_addr);
+}
+
+int ioinst_disassemble_sch_ident(uint32_t value, int *m, int *cssid, int *ssid,
+ int *schid)
+{
+ if (!(value & IOINST_SCHID_ONE)) {
+ return -EINVAL;
+ }
+ if (!(value & IOINST_SCHID_M)) {
+ if (value & IOINST_SCHID_CSSID) {
+ return -EINVAL;
+ }
+ *cssid = 0;
+ *m = 0;
+ } else {
+ *cssid = (value & IOINST_SCHID_CSSID) >> 24;
+ *m = 1;
+ }
+ *ssid = (value & IOINST_SCHID_SSID) >> 17;
+ *schid = value & IOINST_SCHID_NR;
+ return 0;
+}
+
+int ioinst_handle_xsch(CPUS390XState *env, uint64_t reg1)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: xsch (%x.%x.%04x)\n", cssid, ssid, schid);
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_xsch(sch);
+ }
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EBUSY:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ cc = 1;
+ break;
+ }
+
+ return cc;
+}
+
+int ioinst_handle_csch(CPUS390XState *env, uint64_t reg1)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: csch (%x.%x.%04x)\n", cssid, ssid, schid);
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_csch(sch);
+ }
+ if (ret == -ENODEV) {
+ cc = 3;
+ } else {
+ cc = 0;
+ }
+ return cc;
+}
+
+int ioinst_handle_hsch(CPUS390XState *env, uint64_t reg1)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: hsch (%x.%x.%04x)\n", cssid, ssid, schid);
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_hsch(sch);
+ }
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EBUSY:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ cc = 1;
+ break;
+ }
+
+ return cc;
+}
+
+static int ioinst_schib_valid(SCHIB *schib)
+{
+ if ((schib->pmcw.zeroes0 & 0x3) != 0) {
+ return 0;
+ }
+ if ((schib->pmcw.zeroes1 != 0) || (schib->pmcw.zeroes2 != 0)) {
+ return 0;
+ }
+ /* Disallow extended measurements for now. */
+ if (schib->pmcw.xmwme) {
+ return 0;
+ }
+ return 1;
+}
+
+int ioinst_handle_msch(CPUS390XState *env, uint64_t reg1, uint32_t ipb)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ SCHIB *schib;
+ uint64_t addr;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: msch (%x.%x.%04x)\n", cssid, ssid, schid);
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ schib = s390_get_address(env, addr);
+ if (!schib) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ if (!ioinst_schib_valid(schib)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_msch(sch, schib);
+ }
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EBUSY:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ cc = 1;
+ break;
+ }
+
+ return cc;
+}
+
+static int ioinst_orb_valid(ORB *orb)
+{
+ if (orb->zero0 != 0) {
+ return 0;
+ }
+ if (orb->zero1 != 0) {
+ return 0;
+ }
+ if ((orb->cpa & 0x80000000) != 0) {
+ return 0;
+ }
+ return 1;
+}
+
+int ioinst_handle_ssch(CPUS390XState *env, uint64_t reg1, uint32_t ipb)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ ORB *orb;
+ uint64_t addr;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: ssch (%x.%x.%04x)\n", cssid, ssid, schid);
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ orb = s390_get_address(env, addr);
+ if (!orb) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ if (!ioinst_orb_valid(orb)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_ssch(sch, orb);
+ }
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EBUSY:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ cc = 1;
+ break;
+ }
+
+ return cc;
+}
+
+int ioinst_handle_stcrw(CPUS390XState *env, uint32_t ipb)
+{
+ CRW *crw;
+ uint64_t addr;
+ int cc;
+
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ crw = s390_get_address(env, addr);
+ if (!crw) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ if (addr < 8192) {
+ addr += env->psa;
+ } else if ((env->psa <= addr) && (addr < env->psa + 8192)) {
+ addr -= env->psa;
+ }
+ cc = css_do_stcrw(addr);
+ /* 0 - crw stored, 1 - zeroes stored */
+ return cc;
+}
+
+int ioinst_handle_stsch(CPUS390XState *env, uint64_t reg1, uint32_t ipb)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ uint64_t addr;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: stsch (%x.%x.%04x)\n", cssid, ssid, schid);
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ if (addr < 8192) {
+ addr += env->psa;
+ } else if ((env->psa <= addr) && (addr < env->psa + 8192)) {
+ addr -= env->psa;
+ }
+ if (!qemu_get_ram_ptr(addr)) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch) {
+ if (css_subch_visible(sch)) {
+ css_do_stsch(sch, addr);
+ cc = 0;
+ } else {
+ /* Indicate no more subchannels in this css/ss */
+ cc = 3;
+ }
+ } else {
+ if (css_schid_final(cssid, ssid, schid)) {
+ cc = 3; /* No more subchannels in this css/ss */
+ } else {
+ int i;
+
+ /* Store an empty schib. */
+ for (i = 0; i < sizeof(SCHIB); i++) {
+ stb_phys(addr + i, 0);
+ }
+ cc = 0;
+ }
+ }
+ return cc;
+}
+
+int ioinst_handle_tsch(CPUS390XState *env, uint64_t reg1, uint32_t ipb)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ IRB *irb;
+ uint64_t addr;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: tsch (%x.%x.%04x)\n", cssid, ssid, schid);
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ irb = s390_get_address(env, addr);
+ if (!irb) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ if (addr < 8192) {
+ addr += env->psa;
+ } else if ((env->psa <= addr) && (addr < env->psa + 8192)) {
+ addr -= env->psa;
+ }
+ ret = css_do_tsch(sch, addr);
+ /* 0 - status pending, 1 - not status pending */
+ cc = ret;
+ } else {
+ cc = 3;
+ }
+ return cc;
+}
+
+typedef struct ChscReq {
+ uint16_t len;
+ uint16_t command;
+ uint32_t param0;
+ uint32_t param1;
+ uint32_t param2;
+} QEMU_PACKED ChscReq;
+
+typedef struct ChscResp {
+ uint16_t len;
+ uint16_t code;
+ uint32_t param;
+ char data[0];
+} QEMU_PACKED ChscResp;
+
+#define CHSC_SCPD 0x0002
+#define CHSC_SCSC 0x0010
+#define CHSC_SDA 0x0031
+
+static void ioinst_handle_chsc_scpd(ChscReq *req, ChscResp *res)
+{
+ uint16_t resp_code;
+ int rfmt;
+ uint16_t cssid;
+ uint8_t f_chpid, l_chpid;
+ int desc_size;
+ int m;
+
+ rfmt = (req->param0 & 0x00000f00) >> 8;
+ if ((rfmt == 0) || (rfmt == 1)) {
+ rfmt = (req->param0 & 0x10000000) >> 28;
+ }
+ if ((req->len != 0x0010) || (req->param0 & 0xc000f000) ||
+ (req->param1 & 0xffffff00) || req->param2) {
+ resp_code = 0x0003;
+ goto out_err;
+ }
+ if (req->param0 & 0x0f000000) {
+ resp_code = 0x0007;
+ goto out_err;
+ }
+ cssid = (req->param0 & 0x00ff0000) >> 16;
+ m = req->param0 & 0x20000000;
+ if (cssid != 0) {
+ if (!m || !css_present(cssid)) {
+ resp_code = 0x0008;
+ goto out_err;
+ }
+ }
+ f_chpid = req->param0 & 0x000000ff;
+ l_chpid = req->param1 & 0x000000ff;
+ if (l_chpid < f_chpid) {
+ resp_code = 0x0003;
+ goto out_err;
+ }
+ desc_size = css_collect_chp_desc(m, cssid, f_chpid, l_chpid, rfmt,
+ &res->data);
+ res->code = 0x0001;
+ res->len = 8 + desc_size;
+ res->param = rfmt;
+ return;
+
+ out_err:
+ res->code = resp_code;
+ res->len = 8;
+ res->param = rfmt;
+}
+
+static void ioinst_handle_chsc_scsc(ChscReq *req, ChscResp *res)
+{
+ uint8_t cssid;
+ uint16_t resp_code;
+ uint32_t general_chars[510];
+ uint32_t chsc_chars[508];
+
+ if (req->param0 & 0x000f0000) {
+ resp_code = 0x0007;
+ goto out_err;
+ }
+ cssid = (req->param0 & 0x0000ff00) >> 8;
+ if (cssid != 0) {
+ if (!(req->param0 & 0x20000000) || !css_present(cssid)) {
+ resp_code = 0x0008;
+ goto out_err;
+ }
+ }
+ if ((req->param0 & 0xdff000ff) || req->param1 || req->param2) {
+ resp_code = 0x0003;
+ goto out_err;
+ }
+ res->code = 0x0001;
+ res->len = 4080;
+ res->param = 0;
+
+ memset(general_chars, 0, sizeof(general_chars));
+ memset(chsc_chars, 0, sizeof(chsc_chars));
+
+ general_chars[0] = 0x03000000;
+ general_chars[1] = 0x00059000;
+
+ chsc_chars[0] = 0x40000000;
+ chsc_chars[3] = 0x00040000;
+
+ memcpy(res->data, general_chars, sizeof(general_chars));
+ memcpy(res->data + sizeof(general_chars), chsc_chars, sizeof(chsc_chars));
+ return;
+
+ out_err:
+ res->code = resp_code;
+ res->len = 8;
+ res->param = 0;
+}
+
+#define CHSC_SDA_OC_MCSSE 0x0
+#define CHSC_SDA_OC_MSS 0x2
+static void ioinst_handle_chsc_sda(ChscReq *req, ChscResp *res)
+{
+ uint16_t resp_code = 0x0001;
+ uint16_t oc;
+ int ret;
+
+ if ((req->len != 0x0400) || (req->param0 & 0xf0ff0000)) {
+ resp_code = 0x0003;
+ goto out;
+ }
+
+ if (req->param0 & 0x0f000000) {
+ resp_code = 0x0007;
+ goto out;
+ }
+
+ oc = req->param0 & 0x0000ffff;
+ switch (oc) {
+ case CHSC_SDA_OC_MCSSE:
+ ret = css_enable_mcsse();
+ if (ret == -EINVAL) {
+ resp_code = 0x0101;
+ goto out;
+ }
+ break;
+ case CHSC_SDA_OC_MSS:
+ ret = css_enable_mss();
+ if (ret == -EINVAL) {
+ resp_code = 0x0101;
+ goto out;
+ }
+ break;
+ default:
+ resp_code = 0x0003;
+ goto out;
+ }
+
+out:
+ res->code = resp_code;
+ res->len = 8;
+ res->param = 0;
+}
+
+static void ioinst_handle_chsc_unimplemented(ChscResp *res)
+{
+ res->len = 8;
+ res->code = 0x0004;
+ res->param = 0;
+}
+
+int ioinst_handle_chsc(CPUS390XState *env, uint32_t ipb)
+{
+ ChscReq *req;
+ ChscResp *res;
+ uint64_t addr;
+ int reg;
+
+ dprintf("%s\n", "IOINST: CHSC");
+ reg = (ipb >> 20) & 0x00f;
+ addr = env->regs[reg];
+ req = s390_get_address(env, addr);
+ if (!req) {
+ program_interrupt(env, PGM_SPECIFICATION, 2);
+ return -EIO;
+ }
+ if (!env->chsc_page) {
+ env->chsc_page = g_malloc0(TARGET_PAGE_SIZE);
+ } else {
+ memset(env->chsc_page, 0, TARGET_PAGE_SIZE);
+ }
+ res = env->chsc_page;
+ dprintf("IOINST: CHSC: command 0x%04x, len=0x%04x\n",
+ req->command, req->len);
+ switch (req->command) {
+ case CHSC_SCSC:
+ ioinst_handle_chsc_scsc(req, res);
+ break;
+ case CHSC_SCPD:
+ ioinst_handle_chsc_scpd(req, res);
+ break;
+ case CHSC_SDA:
+ ioinst_handle_chsc_sda(req, res);
+ break;
+ default:
+ ioinst_handle_chsc_unimplemented(res);
+ break;
+ }
+ if (addr < 8192) {
+ addr += env->psa;
+ } else if ((env->psa <= addr) && (addr < env->psa + 8192)) {
+ addr -= env->psa;
+ }
+ cpu_physical_memory_write(addr + req->len, res, res->len);
+ return 0;
+}
+
+int ioinst_handle_tpi(CPUS390XState *env, uint32_t ipb)
+{
+ uint64_t addr;
+ int lowcore;
+
+ dprintf("%s\n", "IOINST: tpi");
+ addr = ipb >> 28;
+ if (addr > 0) {
+ addr = env->regs[addr];
+ }
+ addr += (ipb & 0xfff0000) >> 16;
+ lowcore = addr ? 0 : 1;
+ if (addr < 8192) {
+ addr += env->psa;
+ } else if ((env->psa <= addr) && (addr < env->psa + 8192)) {
+ addr -= env->psa;
+ }
+ return css_do_tpi(addr, lowcore);
+}
+
+int ioinst_handle_schm(CPUS390XState *env, uint64_t reg1, uint64_t reg2,
+ uint32_t ipb)
+{
+ uint8_t mbk;
+ int update;
+ int dct;
+
+ dprintf("%s\n", "IOINST: schm");
+
+ if (reg1 & 0x000000000ffffffc) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+
+ mbk = (reg1 & 0x00000000f0000000) >> 28;
+ update = (reg1 & 0x0000000000000002) >> 1;
+ dct = reg1 & 0x0000000000000001;
+
+ if (update && (reg2 & 0x0000000000000fff)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+
+ css_do_schm(mbk, update, dct, update ? reg2 : 0);
+
+ return 0;
+}
+
+int ioinst_handle_rsch(CPUS390XState *env, uint64_t reg1)
+{
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
+ int ret = -ENODEV;
+ int cc;
+
+ if (ioinst_disassemble_sch_ident(reg1, &m, &cssid, &ssid, &schid)) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ dprintf("IOINST: rsch (%x.%x.%04x)\n", cssid, ssid, schid);
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ ret = css_do_rsch(sch);
+ }
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EINVAL:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ cc = 1;
+ break;
+ }
+
+ return cc;
+
+}
+
+int ioinst_handle_rchp(CPUS390XState *env, uint64_t reg1)
+{
+ int cc;
+ uint8_t cssid;
+ uint8_t chpid;
+ int ret;
+
+ if (reg1 & 0xff00ff00) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+
+ cssid = (reg1 >> 16) & 0xff;
+ chpid = reg1 & 0xff;
+ dprintf("IOINST: rchp (%x.%02x)\n", cssid, chpid);
+
+ ret = css_do_rchp(cssid, chpid);
+
+ switch (ret) {
+ case -ENODEV:
+ cc = 3;
+ break;
+ case -EBUSY:
+ cc = 2;
+ break;
+ case 0:
+ cc = 0;
+ break;
+ default:
+ /* Invalid channel subsystem. */
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+
+ return cc;
+}
+
+int ioinst_handle_sal(CPUS390XState *env, uint64_t reg1)
+{
+ /* We do not provide address limit checking, so let's suppress it. */
+ if (reg1 & 0x000000008000ffff) {
+ program_interrupt(env, PGM_OPERAND, 2);
+ return -EIO;
+ }
+ return 0;
+}
diff --git a/target-s390x/ioinst.h b/target-s390x/ioinst.h
new file mode 100644
index 0000000..9810fc5
--- /dev/null
+++ b/target-s390x/ioinst.h
@@ -0,0 +1,213 @@
+/*
+ * S/390 channel I/O instructions
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef IOINST_S390X_H
+#define IOINST_S390X_H
+/*
+ * Channel I/O related definitions, as defined in the Principles
+ * Of Operation (and taken from the Linux implementation).
+ */
+
+/* subchannel status word (command mode only) */
+typedef struct SCSW {
+ uint32_t key:4;
+ uint32_t sctl:1;
+ uint32_t eswf:1;
+ uint32_t cc:2;
+ uint32_t fmt:1;
+ uint32_t pfch:1;
+ uint32_t isic:1;
+ uint32_t alcc:1;
+ uint32_t ssi:1;
+ uint32_t zcc:1;
+ uint32_t ectl:1;
+ uint32_t pno:1;
+ uint32_t res:1;
+ uint32_t fctl:3;
+ uint32_t actl:7;
+ uint32_t stctl:5;
+ uint32_t cpa;
+ uint32_t dstat:8;
+ uint32_t cstat:8;
+ uint32_t count:16;
+} SCSW;
+
+/* path management control word */
+typedef struct PMCW {
+ uint32_t intparm;
+ uint32_t qf:1;
+ uint32_t w:1;
+ uint32_t isc:3;
+ uint32_t zeroes0:3;
+ uint32_t ena:1;
+ uint32_t lm:2;
+ uint32_t mme:2;
+ uint32_t mp:1;
+ uint32_t tf:1;
+ uint32_t dnv:1;
+ uint32_t dev:16;
+ uint8_t lpm;
+ uint8_t pnom;
+ uint8_t lpum;
+ uint8_t pim;
+ uint16_t mbi;
+ uint8_t pom;
+ uint8_t pam;
+ uint8_t chpid[8];
+ uint32_t zeroes1:8;
+ uint32_t st:3;
+ uint32_t zeroes2:18;
+ uint32_t mbfc:1;
+ uint32_t xmwme:1;
+ uint32_t csense:1;
+} PMCW;
+
+/* subchannel information block */
+struct SCHIB {
+ PMCW pmcw;
+ SCSW scsw;
+ uint64_t mba;
+ uint8_t mda[4];
+};
+
+/* interruption response block */
+typedef struct IRB {
+ SCSW scsw;
+ uint32_t esw[5];
+ uint32_t ecw[8];
+ uint32_t emw[8];
+} IRB;
+
+/* operation request block */
+struct ORB {
+ uint32_t intparm;
+ uint32_t key:4;
+ uint32_t spnd:1;
+ uint32_t str:1;
+ uint32_t mod:1;
+ uint32_t sync:1;
+ uint32_t fmt:1;
+ uint32_t pfch:1;
+ uint32_t isic:1;
+ uint32_t alcc:1;
+ uint32_t ssic:1;
+ uint32_t zero0:1;
+ uint32_t c64:1;
+ uint32_t i2k:1;
+ uint32_t lpm:8;
+ uint32_t ils:1;
+ uint32_t midaw:1;
+ uint32_t zero1:5;
+ uint32_t orbx:1;
+ uint32_t cpa;
+};
+
+/* channel command word (type 1) */
+typedef struct CCW1 {
+ uint8_t cmd_code;
+ uint8_t flags;
+ uint16_t count;
+ uint32_t cda;
+} CCW1;
+
+#define CCW_FLAG_DC 0x80
+#define CCW_FLAG_CC 0x40
+#define CCW_FLAG_SLI 0x20
+#define CCW_FLAG_SKIP 0x10
+#define CCW_FLAG_PCI 0x08
+#define CCW_FLAG_IDA 0x04
+#define CCW_FLAG_SUSPEND 0x02
+
+#define CCW_CMD_NOOP 0x03
+#define CCW_CMD_BASIC_SENSE 0x04
+#define CCW_CMD_TIC 0x08
+#define CCW_CMD_SENSE_ID 0xe4
+
+#define SCSW_FCTL_CLEAR_FUNC 0x1
+#define SCSW_FCTL_HALT_FUNC 0x2
+#define SCSW_FCTL_START_FUNC 0x4
+
+#define SCSW_ACTL_SUSP 0x1
+#define SCSW_ACTL_DEVICE_ACTIVE 0x2
+#define SCSW_ACTL_SUBCH_ACTIVE 0x4
+#define SCSW_ACTL_CLEAR_PEND 0x8
+#define SCSW_ACTL_HALT_PEND 0x10
+#define SCSW_ACTL_START_PEND 0x20
+#define SCSW_ACTL_RESUME_PEND 0x40
+
+#define SCSW_STCTL_STATUS_PEND 0x1
+#define SCSW_STCTL_SECONDARY 0x2
+#define SCSW_STCTL_PRIMARY 0x4
+#define SCSW_STCTL_INTERMEDIATE 0x8
+#define SCSW_STCTL_ALERT 0x10
+
+#define SCSW_DSTAT_ATTENTION 0x80
+#define SCSW_DSTAT_STAT_MOD 0x40
+#define SCSW_DSTAT_CU_END 0x20
+#define SCSW_DSTAT_BUSY 0x10
+#define SCSW_DSTAT_CHANNEL_END 0x08
+#define SCSW_DSTAT_DEVICE_END 0x04
+#define SCSW_DSTAT_UNIT_CHECK 0x02
+#define SCSW_DSTAT_UNIT_EXCEP 0x01
+
+#define SCSW_CSTAT_PCI 0x80
+#define SCSW_CSTAT_INCORR_LEN 0x40
+#define SCSW_CSTAT_PROG_CHECK 0x20
+#define SCSW_CSTAT_PROT_CHECK 0x10
+#define SCSW_CSTAT_DATA_CHECK 0x08
+#define SCSW_CSTAT_CHN_CTRL_CHK 0x04
+#define SCSW_CSTAT_INTF_CTRL_CHK 0x02
+#define SCSW_CSTAT_CHAIN_CHECK 0x01
+
+typedef struct CRW {
+ uint16_t zero0:1;
+ uint16_t s:1;
+ uint16_t r:1;
+ uint16_t c:1;
+ uint16_t rsc:4;
+ uint16_t a:1;
+ uint16_t zero1:1;
+ uint16_t erc:6;
+ uint16_t rsid;
+} CRW;
+
+#define CRW_ERC_INIT 0x02
+#define CRW_ERC_IPI 0x04
+
+#define CRW_RSC_SUBCH 0x3
+#define CRW_RSC_CHP 0x4
+
+/* schid disintegration */
+#define IOINST_SCHID_ONE 0x00010000
+#define IOINST_SCHID_M 0x00080000
+#define IOINST_SCHID_CSSID 0xff000000
+#define IOINST_SCHID_SSID 0x00060000
+#define IOINST_SCHID_NR 0x0000ffff
+
+int ioinst_disassemble_sch_ident(uint32_t value, int *m, int *cssid, int *ssid,
+ int *schid);
+int ioinst_handle_xsch(CPUS390XState *env, uint64_t reg1);
+int ioinst_handle_csch(CPUS390XState *env, uint64_t reg1);
+int ioinst_handle_hsch(CPUS390XState *env, uint64_t reg1);
+int ioinst_handle_msch(CPUS390XState *env, uint64_t reg1, uint32_t ipb);
+int ioinst_handle_ssch(CPUS390XState *env, uint64_t reg1, uint32_t ipb);
+int ioinst_handle_stcrw(CPUS390XState *env, uint32_t ipb);
+int ioinst_handle_stsch(CPUS390XState *env, uint64_t reg1, uint32_t ipb);
+int ioinst_handle_tsch(CPUS390XState *env, uint64_t reg1, uint32_t ipb);
+int ioinst_handle_chsc(CPUS390XState *env, uint32_t ipb);
+int ioinst_handle_tpi(CPUS390XState *env, uint32_t ipb);
+int ioinst_handle_schm(CPUS390XState *env, uint64_t reg1, uint64_t reg2,
+ uint32_t ipb);
+int ioinst_handle_rsch(CPUS390XState *env, uint64_t reg1);
+int ioinst_handle_rchp(CPUS390XState *env, uint64_t reg1);
+int ioinst_handle_sal(CPUS390XState *env, uint64_t reg1);
+
+#endif
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index a66ac43..b53391e 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -26,10 +26,13 @@
#include "qemu-common.h"
#include "qemu-timer.h"
+#include "qemu-thread.h"
#include "sysemu.h"
#include "kvm.h"
#include "cpu.h"
#include "device_tree.h"
+#include "trace.h"
+#include "ioinst.h"
/* #define DEBUG_KVM */
@@ -43,9 +46,27 @@
#define IPA0_DIAG 0x8300
#define IPA0_SIGP 0xae00
-#define IPA0_PRIV 0xb200
+#define IPA0_B2 0xb200
+#define IPA0_B9 0xb900
+#define IPA0_EB 0xeb00
#define PRIV_SCLP_CALL 0x20
+#define PRIV_CSCH 0x30
+#define PRIV_HSCH 0x31
+#define PRIV_MSCH 0x32
+#define PRIV_SSCH 0x33
+#define PRIV_STSCH 0x34
+#define PRIV_TSCH 0x35
+#define PRIV_TPI 0x36
+#define PRIV_SAL 0x37
+#define PRIV_RSCH 0x38
+#define PRIV_STCRW 0x39
+#define PRIV_STCPS 0x3a
+#define PRIV_RCHP 0x3b
+#define PRIV_SCHM 0x3c
+#define PRIV_CHSC 0x5f
+#define PRIV_SIGA 0x74
+#define PRIV_XSCH 0x76
#define DIAG_KVM_HYPERCALL 0x500
#define DIAG_KVM_BREAKPOINT 0x501
@@ -350,10 +371,120 @@ static int kvm_sclp_service_call(CPUS390XState *env, struct kvm_run *run,
return 0;
}
-static int handle_priv(CPUS390XState *env, struct kvm_run *run, uint8_t ipa1)
+static int kvm_handle_css_inst(CPUS390XState *env, struct kvm_run *run,
+ uint8_t ipa0, uint8_t ipa1, uint8_t ipb)
+{
+ int r = 0;
+ int no_cc = 0;
+
+ if (ipa0 != 0xb2) {
+ /* Not handled for now. */
+ return -1;
+ }
+ cpu_synchronize_state(env);
+ switch (ipa1) {
+ case PRIV_XSCH:
+ r = ioinst_handle_xsch(env, env->regs[1]);
+ break;
+ case PRIV_CSCH:
+ r = ioinst_handle_csch(env, env->regs[1]);
+ break;
+ case PRIV_HSCH:
+ r = ioinst_handle_hsch(env, env->regs[1]);
+ break;
+ case PRIV_MSCH:
+ r = ioinst_handle_msch(env, env->regs[1], run->s390_sieic.ipb);
+ break;
+ case PRIV_SSCH:
+ r = ioinst_handle_ssch(env, env->regs[1], run->s390_sieic.ipb);
+ break;
+ case PRIV_STCRW:
+ r = ioinst_handle_stcrw(env, run->s390_sieic.ipb);
+ break;
+ case PRIV_STSCH:
+ r = ioinst_handle_stsch(env, env->regs[1], run->s390_sieic.ipb);
+ break;
+ case PRIV_TSCH:
+ /* We should only get tsch via KVM_EXIT_S390_TSCH. */
+ fprintf(stderr, "Spurious tsch intercept\n");
+ break;
+ case PRIV_CHSC:
+ r = ioinst_handle_chsc(env, run->s390_sieic.ipb);
+ break;
+ case PRIV_TPI:
+ /* This should have been handled by kvm already. */
+ fprintf(stderr, "Spurious tpi intercept\n");
+ break;
+ case PRIV_SCHM:
+ no_cc = 1;
+ r = ioinst_handle_schm(env, env->regs[1], env->regs[2],
+ run->s390_sieic.ipb);
+ break;
+ case PRIV_RSCH:
+ r = ioinst_handle_rsch(env, env->regs[1]);
+ break;
+ case PRIV_RCHP:
+ r = ioinst_handle_rchp(env, env->regs[1]);
+ break;
+ case PRIV_STCPS:
+ /* We do not provide this instruction, it is suppressed. */
+ no_cc = 1;
+ r = 0;
+ break;
+ case PRIV_SAL:
+ no_cc = 1;
+ r = ioinst_handle_sal(env, env->regs[1]);
+ break;
+ default:
+ r = -1;
+ break;
+ }
+
+ if (r >= 0) {
+ if (!no_cc) {
+ setcc(env, r);
+ }
+ r = 0;
+ } else if (r < -1) {
+ r = 0;
+ }
+ return r;
+}
+
+static int is_ioinst(uint8_t ipa0, uint8_t ipa1, uint8_t ipb)
+{
+ int ret = 0;
+
+ switch (ipa0) {
+ case 0xb2:
+ if (((ipa1 >= 0x30) && (ipa1 <= 0x3c)) ||
+ (ipa1 == 0x5f) ||
+ (ipa1 == 0x74) ||
+ (ipa1 == 0x76)) {
+ ret = 1;
+ }
+ break;
+ case 0xb9:
+ if (ipa1 == 0x9c) {
+ ret = 1;
+ }
+ break;
+ case 0xeb:
+ if (ipb == 0x8a) {
+ ret = 1;
+ }
+ break;
+ }
+
+ return ret;
+}
+
+static int handle_priv(CPUS390XState *env, struct kvm_run *run,
+ uint8_t ipa0, uint8_t ipa1)
{
int r = 0;
uint16_t ipbh0 = (run->s390_sieic.ipb & 0xffff0000) >> 16;
+ uint8_t ipb = run->s390_sieic.ipb & 0xff;
dprintf("KVM: PRIV: %d\n", ipa1);
switch (ipa1) {
@@ -361,8 +492,16 @@ static int handle_priv(CPUS390XState *env, struct kvm_run *run, uint8_t ipa1)
r = kvm_sclp_service_call(env, run, ipbh0);
break;
default:
- dprintf("KVM: unknown PRIV: 0x%x\n", ipa1);
- r = -1;
+ if (is_ioinst(ipa0, ipa1, ipb)) {
+ r = kvm_handle_css_inst(env, run, ipa0, ipa1, ipb);
+ if (r == -1) {
+ setcc(env, 3);
+ r = 0;
+ }
+ } else {
+ dprintf("KVM: unknown PRIV: 0x%x\n", ipa1);
+ r = -1;
+ }
break;
}
@@ -500,15 +639,17 @@ static int handle_instruction(CPUS390XState *env, struct kvm_run *run)
dprintf("handle_instruction 0x%x 0x%x\n", run->s390_sieic.ipa, run->s390_sieic.ipb);
switch (ipa0) {
- case IPA0_PRIV:
- r = handle_priv(env, run, ipa1);
- break;
- case IPA0_DIAG:
- r = handle_diag(env, run, ipb_code);
- break;
- case IPA0_SIGP:
- r = handle_sigp(env, run, ipa1);
- break;
+ case IPA0_B2:
+ case IPA0_B9:
+ case IPA0_EB:
+ r = handle_priv(env, run, ipa0 >> 8, ipa1);
+ break;
+ case IPA0_DIAG:
+ r = handle_diag(env, run, ipb_code);
+ break;
+ case IPA0_SIGP:
+ r = handle_sigp(env, run, ipa1);
+ break;
}
if (r < 0) {
@@ -565,6 +706,38 @@ static int handle_intercept(CPUS390XState *env)
return r;
}
+static int handle_tsch(CPUS390XState *env, struct kvm_run *run, int dequeued,
+ uint16_t subchannel_id, uint16_t subchannel_nr,
+ uint32_t io_int_parm, uint32_t io_int_word)
+{
+ int ret;
+
+ cpu_synchronize_state(env);
+ ret = ioinst_handle_tsch(env, env->regs[1], run->s390_tsch.ipb);
+ if (ret >= 0) {
+ /* Success; set condition code. */
+ setcc(env, ret);
+ ret = 0;
+ } else if (ret < -1) {
+ /*
+ * Failure.
+ * If an I/O interrupt had been dequeued, we have to reinject it.
+ */
+ if (dequeued) {
+ uint32_t type = ((subchannel_id & 0xff00) << 24) |
+ ((subchannel_id & 0x00060) << 22) | (subchannel_nr << 16);
+
+ kvm_s390_interrupt_internal(env, type,
+ ((uint32_t)subchannel_id << 16)
+ | subchannel_nr,
+ ((uint64_t)io_int_parm << 32)
+ | io_int_word, 1);
+ }
+ ret = 0;
+ }
+ return ret;
+}
+
int kvm_arch_handle_exit(CPUS390XState *env, struct kvm_run *run)
{
int ret = 0;
@@ -576,6 +749,13 @@ int kvm_arch_handle_exit(CPUS390XState *env, struct kvm_run *run)
case KVM_EXIT_S390_RESET:
qemu_system_reset_request();
break;
+ case KVM_EXIT_S390_TSCH:
+ ret = handle_tsch(env, run, run->s390_tsch.dequeued,
+ run->s390_tsch.subchannel_id,
+ run->s390_tsch.subchannel_nr,
+ run->s390_tsch.io_int_parm,
+ run->s390_tsch.io_int_word);
+ break;
default:
fprintf(stderr, "Unknown KVM exit: %d\n", run->exit_reason);
break;
@@ -601,3 +781,48 @@ int kvm_arch_on_sigbus(int code, void *addr)
{
return 1;
}
+
+int kvm_s390_io_interrupt(CPUS390XState *env, uint16_t subchannel_id,
+ uint16_t subchannel_nr, uint32_t io_int_parm,
+ uint32_t io_int_word)
+{
+ uint32_t type;
+
+ if (!kvm_enabled()) {
+ return -EOPNOTSUPP;
+ }
+
+ type = ((subchannel_id & 0xff00) << 24) |
+ ((subchannel_id & 0x00060) << 22) | (subchannel_nr << 16);
+ kvm_s390_interrupt_internal(env, type,
+ ((uint32_t)subchannel_id << 16) | subchannel_nr,
+ ((uint64_t)io_int_parm << 32) | io_int_word, 1);
+ return 0;
+}
+
+int kvm_s390_crw_mchk(CPUS390XState *env)
+{
+ if (!kvm_enabled()) {
+ return -EOPNOTSUPP;
+ }
+
+ kvm_s390_interrupt_internal(env, KVM_S390_MCHK, 1 << 28,
+ 0x00400f1d40330000, 1);
+ return 0;
+}
+
+void kvm_s390_enable_css_support(CPUS390XState *env)
+{
+ struct kvm_enable_cap cap = {};
+ int r;
+
+ /* Activate host kernel channel subsystem support. */
+ if (kvm_enabled()) {
+ /* One CPU has to run */
+ s390_add_running_cpu(env);
+
+ cap.cap = KVM_CAP_S390_CSS_SUPPORT;
+ r = kvm_vcpu_ioctl(env, KVM_ENABLE_CAP, &cap);
+ assert(r == 0);
+ }
+}
diff --git a/target-s390x/misc_helper.c b/target-s390x/misc_helper.c
index 38d8f2a..cd4bca1 100644
--- a/target-s390x/misc_helper.c
+++ b/target-s390x/misc_helper.c
@@ -49,12 +49,12 @@ void HELPER(exception)(CPUS390XState *env, uint32_t excp)
cpu_loop_exit(env);
}
-#ifndef CONFIG_USER_ONLY
void program_interrupt(CPUS390XState *env, uint32_t code, int ilc)
{
qemu_log_mask(CPU_LOG_INT, "program interrupt at %#" PRIx64 "\n",
env->psw.addr);
+#ifndef CONFIG_USER_ONLY
if (kvm_enabled()) {
#ifdef CONFIG_KVM
kvm_s390_interrupt(env, KVM_S390_PROGRAM_INT, code);
@@ -65,8 +65,12 @@ void program_interrupt(CPUS390XState *env, uint32_t code, int ilc)
env->exception_index = EXCP_PGM;
cpu_loop_exit(env);
}
+#else
+ cpu_abort(env, "Program check %x\n", code);
+#endif
}
+#ifndef CONFIG_USER_ONLY
/* SCLP service call */
uint32_t HELPER(servc)(CPUS390XState *env, uint32_t r1, uint64_t r2)
{
--
1.7.12.4
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [Qemu-devel] [PATCH 3/3] s390: Add new channel I/O based virtio transport.
2012-10-31 16:24 [Qemu-devel] [RFC PATCH v3 0/3] s390: channel I/O support in qemu Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 1/3] Update linux headers Cornelia Huck
2012-10-31 16:24 ` [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support Cornelia Huck
@ 2012-10-31 16:24 ` Cornelia Huck
2 siblings, 0 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-10-31 16:24 UTC (permalink / raw)
To: KVM, linux-s390, qemu-devel
Cc: Carsten Otte, Anthony Liguori, Sebastian Ott, Marcelo Tosatti,
Heiko Carstens, Alexander Graf, Christian Borntraeger, Avi Kivity,
Martin Schwidefsky
Add a new virtio transport that uses channel commands to perform
virtio operations.
Add a new machine type s390-ccw that uses this virtio-ccw transport
and make it the default machine for s390.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
---
hw/s390-virtio.c | 282 +++++++++++----
hw/s390x/Makefile.objs | 1 +
hw/s390x/virtio-ccw.c | 904 +++++++++++++++++++++++++++++++++++++++++++++++++
hw/s390x/virtio-ccw.h | 81 +++++
4 files changed, 1195 insertions(+), 73 deletions(-)
create mode 100644 hw/s390x/virtio-ccw.c
create mode 100644 hw/s390x/virtio-ccw.h
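
Not part of the patch itself, but for reviewers who want to give the new
transport a spin, here is a minimal invocation sketch. The machine name and
the virtio-blk-ccw device (with its optional "devno" property) come from the
code below; the kernel, initrd and disk image paths are placeholders:

  qemu-system-s390x -M s390-ccw-virtio -nographic \
      -kernel vmlinux -initrd initrd.img \
      -drive file=disk.img,if=none,id=d0 \
      -device virtio-blk-ccw,drive=d0

If "devno" is not specified, the device is assigned the next free device
number and subchannel in the virtual channel subsystem.
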
diff --git a/hw/s390-virtio.c b/hw/s390-virtio.c
index 685cb54..0d95a2b 100644
--- a/hw/s390-virtio.c
+++ b/hw/s390-virtio.c
@@ -30,9 +30,13 @@
#include "hw/sysbus.h"
#include "kvm.h"
#include "exec-memory.h"
+#include "qemu-thread.h"
#include "hw/s390-virtio-bus.h"
#include "hw/s390x/sclp.h"
+#include "hw/s390x/css.h"
+#include "hw/s390x/virtio-ccw.h"
+#include "hw/virtio-serial.h"
//#define DEBUG_S390
@@ -47,6 +51,7 @@
#define KVM_S390_VIRTIO_NOTIFY 0
#define KVM_S390_VIRTIO_RESET 1
#define KVM_S390_VIRTIO_SET_STATUS 2
+#define KVM_S390_VIRTIO_CCW_NOTIFY 3
#define KERN_IMAGE_START 0x010000UL
#define KERN_PARM_AREA 0x010480UL
@@ -63,6 +68,7 @@
static VirtIOS390Bus *s390_bus;
static S390CPU **ipi_states;
+VirtioCcwBus *ccw_bus;
S390CPU *s390_cpu_addr2state(uint16_t cpu_addr)
{
@@ -76,15 +82,21 @@ S390CPU *s390_cpu_addr2state(uint16_t cpu_addr)
int s390_virtio_hypercall(CPUS390XState *env, uint64_t mem, uint64_t hypercall)
{
int r = 0, i;
+ int cssid, ssid, schid, m;
+ SubchDev *sch;
dprintf("KVM hypercall: %ld\n", hypercall);
switch (hypercall) {
case KVM_S390_VIRTIO_NOTIFY:
if (mem > ram_size) {
- VirtIOS390Device *dev = s390_virtio_bus_find_vring(s390_bus,
- mem, &i);
- if (dev) {
- virtio_queue_notify(dev->vdev, i);
+ if (s390_bus) {
+ VirtIOS390Device *dev = s390_virtio_bus_find_vring(s390_bus,
+ mem, &i);
+ if (dev) {
+ virtio_queue_notify(dev->vdev, i);
+ } else {
+ r = -EINVAL;
+ }
} else {
r = -EINVAL;
}
@@ -93,28 +105,49 @@ int s390_virtio_hypercall(CPUS390XState *env, uint64_t mem, uint64_t hypercall)
}
break;
case KVM_S390_VIRTIO_RESET:
- {
- VirtIOS390Device *dev;
-
- dev = s390_virtio_bus_find_mem(s390_bus, mem);
- virtio_reset(dev->vdev);
- stb_phys(dev->dev_offs + VIRTIO_DEV_OFFS_STATUS, 0);
- s390_virtio_device_sync(dev);
- s390_virtio_reset_idx(dev);
+ if (s390_bus) {
+ VirtIOS390Device *dev;
+
+ dev = s390_virtio_bus_find_mem(s390_bus, mem);
+ virtio_reset(dev->vdev);
+ stb_phys(dev->dev_offs + VIRTIO_DEV_OFFS_STATUS, 0);
+ s390_virtio_device_sync(dev);
+ s390_virtio_reset_idx(dev);
+ } else {
+ r = -EINVAL;
+ }
break;
- }
case KVM_S390_VIRTIO_SET_STATUS:
- {
- VirtIOS390Device *dev;
+ if (s390_bus) {
+ VirtIOS390Device *dev;
- dev = s390_virtio_bus_find_mem(s390_bus, mem);
- if (dev) {
- s390_virtio_device_update_status(dev);
+ dev = s390_virtio_bus_find_mem(s390_bus, mem);
+ if (dev) {
+ s390_virtio_device_update_status(dev);
+ } else {
+ r = -EINVAL;
+ }
} else {
r = -EINVAL;
}
break;
- }
+ case KVM_S390_VIRTIO_CCW_NOTIFY:
+ if (ccw_bus) {
+ if (ioinst_disassemble_sch_ident(env->regs[2], &m, &cssid, &ssid,
+ &schid)) {
+ r = -EINVAL;
+ } else {
+ sch = css_find_subch(m, cssid, ssid, schid);
+ if (sch && css_subch_visible(sch)) {
+ virtio_queue_notify(virtio_ccw_get_vdev(sch), env->regs[3]);
+ } else {
+ r = -EINVAL;
+ }
+ }
+ } else {
+ r = -EINVAL;
+ }
+ break;
default:
r = -EINVAL;
break;
@@ -151,60 +184,12 @@ unsigned s390_del_running_cpu(CPUS390XState *env)
return s390_running_cpus;
}
-/* PC hardware initialisation */
-static void s390_init(QEMUMachineInitArgs *args)
+static CPUS390XState *s390_init_cpus(const char *cpu_model,
+ uint8_t *storage_keys)
{
- ram_addr_t my_ram_size = args->ram_size;
- ram_addr_t ram_size = args->ram_size;
- const char *cpu_model = args->cpu_model;
- const char *kernel_filename = args->kernel_filename;
- const char *kernel_cmdline = args->kernel_cmdline;
- const char *initrd_filename = args->initrd_filename;
CPUS390XState *env = NULL;
- MemoryRegion *sysmem = get_system_memory();
- MemoryRegion *ram = g_new(MemoryRegion, 1);
- ram_addr_t kernel_size = 0;
- ram_addr_t initrd_offset;
- ram_addr_t initrd_size = 0;
- int shift = 0;
- uint8_t *storage_keys;
- void *virtio_region;
- hwaddr virtio_region_len;
- hwaddr virtio_region_start;
int i;
- /* s390x ram size detection needs a 16bit multiplier + an increment. So
- guests > 64GB can be specified in 2MB steps etc. */
- while ((my_ram_size >> (20 + shift)) > 65535) {
- shift++;
- }
- my_ram_size = my_ram_size >> (20 + shift) << (20 + shift);
-
- /* lets propagate the changed ram size into the global variable. */
- ram_size = my_ram_size;
-
- /* get a BUS */
- s390_bus = s390_virtio_bus_init(&my_ram_size);
- s390_sclp_init();
-
- /* allocate RAM */
- memory_region_init_ram(ram, "s390.ram", my_ram_size);
- vmstate_register_ram_global(ram);
- memory_region_add_subregion(sysmem, 0, ram);
-
- /* clear virtio region */
- virtio_region_len = my_ram_size - ram_size;
- virtio_region_start = ram_size;
- virtio_region = cpu_physical_memory_map(virtio_region_start,
- &virtio_region_len, true);
- memset(virtio_region, 0, virtio_region_len);
- cpu_physical_memory_unmap(virtio_region, virtio_region_len, 1,
- virtio_region_len);
-
- /* allocate storage keys */
- storage_keys = g_malloc0(my_ram_size / TARGET_PAGE_SIZE);
-
- /* init CPUs */
if (cpu_model == NULL) {
cpu_model = "host";
}
@@ -225,6 +210,17 @@ static void s390_init(QEMUMachineInitArgs *args)
tmp_env->exception_index = EXCP_HLT;
tmp_env->storage_keys = storage_keys;
}
+ return env;
+}
+
+static void s390_set_up_kernel(CPUS390XState *env,
+ const char *kernel_filename,
+ const char *kernel_cmdline,
+ const char *initrd_filename)
+{
+ ram_addr_t kernel_size = 0;
+ ram_addr_t initrd_offset;
+ ram_addr_t initrd_size = 0;
/* One CPU has to run */
s390_add_running_cpu(env);
@@ -297,8 +293,13 @@ static void s390_init(QEMUMachineInitArgs *args)
strlen(kernel_cmdline) + 1);
}
- /* Create VirtIO network adapters */
- for(i = 0; i < nb_nics; i++) {
+}
+
+static void s390_create_virtio_net(BusState *bus, const char *name)
+{
+ int i;
+
+ for (i = 0; i < nb_nics; i++) {
NICInfo *nd = &nd_table[i];
DeviceState *dev;
@@ -311,7 +312,7 @@ static void s390_init(QEMUMachineInitArgs *args)
exit(1);
}
- dev = qdev_create((BusState *)s390_bus, "virtio-net-s390");
+ dev = qdev_create(bus, name);
qdev_set_nic_properties(dev, nd);
qdev_init_nofail(dev);
}
@@ -332,6 +333,64 @@ static void s390_init(QEMUMachineInitArgs *args)
}
}
+/* PC hardware initialisation */
+static void s390_init(QEMUMachineInitArgs *args)
+{
+ ram_addr_t my_ram_size = args->ram_size;
+ ram_addr_t ram_size = args->ram_size;
+ const char *cpu_model = args->cpu_model;
+ const char *kernel_filename = args->kernel_filename;
+ const char *kernel_cmdline = args->kernel_cmdline;
+ const char *initrd_filename = args->initrd_filename;
+ CPUS390XState *env = NULL;
+ MemoryRegion *sysmem = get_system_memory();
+ MemoryRegion *ram = g_new(MemoryRegion, 1);
+ int shift = 0;
+ uint8_t *storage_keys;
+ void *virtio_region;
+ hwaddr virtio_region_len;
+ hwaddr virtio_region_start;
+
+ /* The storage increment size is a multiple of 1M and is a power of 2.
+ * The number of storage increments must be 512 or fewer. */
+ while ((my_ram_size >> (20 + shift)) > 512) {
+ shift++;
+ }
+ my_ram_size = my_ram_size >> (20 + shift) << (20 + shift);
+
+ /* lets propagate the changed ram size into the global variable. */
+ ram_size = my_ram_size;
+
+ /* get a BUS */
+ s390_bus = s390_virtio_bus_init(&my_ram_size);
+
+ /* allocate RAM */
+ memory_region_init_ram(ram, "s390.ram", my_ram_size);
+ vmstate_register_ram_global(ram);
+ memory_region_add_subregion(sysmem, 0, ram);
+
+ /* clear virtio region */
+ virtio_region_len = my_ram_size - ram_size;
+ virtio_region_start = ram_size;
+ virtio_region = cpu_physical_memory_map(virtio_region_start,
+ &virtio_region_len, true);
+ memset(virtio_region, 0, virtio_region_len);
+ cpu_physical_memory_unmap(virtio_region, virtio_region_len, 1,
+ virtio_region_len);
+
+ /* allocate storage keys */
+ storage_keys = g_malloc0(my_ram_size / TARGET_PAGE_SIZE);
+
+ /* init CPUs */
+ env = s390_init_cpus(cpu_model, storage_keys);
+
+ s390_set_up_kernel(env, kernel_filename, kernel_cmdline, initrd_filename);
+
+ /* Create VirtIO network adapters */
+ s390_create_virtio_net((BusState *)s390_bus, "virtio-net-s390");
+
+}
+
static QEMUMachine s390_machine = {
.name = "s390-virtio",
.alias = "s390",
@@ -344,7 +403,6 @@ static QEMUMachine s390_machine = {
.no_sdcard = 1,
.use_virtcon = 1,
.max_cpus = 255,
- .is_default = 1,
};
static void s390_machine_init(void)
@@ -353,3 +411,81 @@ static void s390_machine_init(void)
}
machine_init(s390_machine_init);
+
+static void ccw_init(QEMUMachineInitArgs *args)
+{
+ ram_addr_t my_ram_size = args->ram_size;
+ ram_addr_t ram_size = args->ram_size;
+ const char *cpu_model = args->cpu_model;
+ const char *kernel_filename = args->kernel_filename;
+ const char *kernel_cmdline = args->kernel_cmdline;
+ const char *initrd_filename = args->initrd_filename;
+ CPUS390XState *env = NULL;
+ MemoryRegion *sysmem = get_system_memory();
+ MemoryRegion *ram = g_new(MemoryRegion, 1);
+ int shift = 0;
+ uint8_t *storage_keys;
+ int ret;
+
+ /* The storage increment size is a multiple of 1M and is a power of 2.
+ * The number of storage increments must be 512 or fewer. */
+ while ((my_ram_size >> (20 + shift)) > 512) {
+ shift++;
+ }
+ my_ram_size = my_ram_size >> (20 + shift) << (20 + shift);
+
+ /* lets propagate the changed ram size into the global variable. */
+ ram_size = my_ram_size;
+
+ /* get a BUS */
+ ccw_bus = virtio_ccw_bus_init();
+
+ /* allocate RAM */
+ memory_region_init_ram(ram, "s390.ram", my_ram_size);
+ vmstate_register_ram_global(ram);
+ memory_region_add_subregion(sysmem, 0, ram);
+
+ /* allocate storage keys */
+ storage_keys = g_malloc0(my_ram_size / TARGET_PAGE_SIZE);
+
+ /* init CPUs */
+ env = s390_init_cpus(cpu_model, storage_keys);
+
+ kvm_s390_enable_css_support(env);
+
+ /*
+ * Create virtual css and set it as default so that non mcss-e
+ * enabled guests only see virtio devices.
+ */
+ ret = css_create_css_image(VIRTUAL_CSSID, true);
+ assert(ret == 0);
+
+
+ s390_set_up_kernel(env, kernel_filename, kernel_cmdline, initrd_filename);
+
+ /* Create VirtIO network adapters */
+ s390_create_virtio_net((BusState *)ccw_bus, "virtio-net-ccw");
+
+}
+
+static QEMUMachine ccw_machine = {
+ .name = "s390-ccw-virtio",
+ .alias = "s390-ccw",
+ .desc = "VirtIO-ccw based S390 machine",
+ .init = ccw_init,
+ .no_cdrom = 1,
+ .no_floppy = 1,
+ .no_serial = 1,
+ .no_parallel = 1,
+ .no_sdcard = 1,
+ .use_virtcon = 1,
+ .max_cpus = 255,
+ .is_default = 1,
+};
+
+static void ccw_machine_init(void)
+{
+ qemu_register_machine(&ccw_machine);
+}
+
+machine_init(ccw_machine_init);
diff --git a/hw/s390x/Makefile.objs b/hw/s390x/Makefile.objs
index 378b099..d408558 100644
--- a/hw/s390x/Makefile.objs
+++ b/hw/s390x/Makefile.objs
@@ -5,3 +5,4 @@ obj-y += sclp.o
obj-y += event-facility.o
obj-y += sclpquiesce.o sclpconsole.o
obj-y += css.o
+obj-y += virtio-ccw.o
diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
new file mode 100644
index 0000000..680254e
--- /dev/null
+++ b/hw/s390x/virtio-ccw.c
@@ -0,0 +1,904 @@
+/*
+ * virtio ccw target implementation
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include <hw/hw.h>
+#include "block.h"
+#include "blockdev.h"
+#include "sysemu.h"
+#include "net.h"
+#include "monitor.h"
+#include "qemu-thread.h"
+#include "hw/virtio.h"
+#include "hw/virtio-serial.h"
+#include "hw/virtio-net.h"
+#include "hw/sysbus.h"
+#include "bitops.h"
+
+#include "ioinst.h"
+#include "css.h"
+#include "virtio-ccw.h"
+
+static const TypeInfo virtio_ccw_bus_info = {
+ .name = TYPE_VIRTIO_CCW_BUS,
+ .parent = TYPE_BUS,
+ .instance_size = sizeof(VirtioCcwBus),
+};
+
+static const VirtIOBindings virtio_ccw_bindings;
+
+VirtIODevice *virtio_ccw_get_vdev(SubchDev *sch)
+{
+ VirtIODevice *vdev = NULL;
+
+ if (sch->driver_data) {
+ vdev = ((VirtioCcwData *)sch->driver_data)->vdev;
+ }
+ return vdev;
+}
+
+static void virtio_ccw_reset_subchannels(void *opaque)
+{
+ VirtioCcwBus *bus = opaque;
+ BusChild *kid;
+ VirtioCcwData *data;
+
+ QTAILQ_FOREACH(kid, &bus->bus.children, sibling) {
+ data = (VirtioCcwData *)kid->child;
+ virtio_reset(data->vdev);
+ css_reset_sch(data->sch);
+ }
+ css_reset();
+}
+
+VirtioCcwBus *virtio_ccw_bus_init(void)
+{
+ VirtioCcwBus *cbus;
+ BusState *bus;
+ DeviceState *dev;
+
+ /* Create bridge device */
+ dev = qdev_create(NULL, "virtio-ccw-bridge");
+ qdev_init_nofail(dev);
+
+ /* Create bus on bridge device */
+ bus = qbus_create(TYPE_VIRTIO_CCW_BUS, dev, "virtio-ccw");
+ cbus = DO_UPCAST(VirtioCcwBus, bus, bus);
+
+ /* Enable hotplugging */
+ bus->allow_hotplug = 1;
+
+ qemu_register_reset(virtio_ccw_reset_subchannels, cbus);
+ return cbus;
+}
+
+/* Communication blocks used by several channel commands. */
+typedef struct VqInfoBlock {
+ uint64_t queue;
+ uint32_t align;
+ uint16_t index;
+ uint16_t num;
+} QEMU_PACKED VqInfoBlock;
+
+typedef struct VqConfigBlock {
+ uint16_t index;
+ uint16_t num_max;
+} QEMU_PACKED VqConfigBlock;
+
+typedef struct VirtioFeatDesc {
+ uint32_t features;
+ uint8_t index;
+} QEMU_PACKED VirtioFeatDesc;
+
+/* Specify where the virtqueues for the subchannel are in guest memory. */
+static int virtio_ccw_set_vqs(SubchDev *sch, uint64_t addr, uint32_t align,
+ uint16_t index, uint16_t num)
+{
+ VirtioCcwData *data = sch->driver_data;
+
+ if (index >= VIRTIO_PCI_QUEUE_MAX) {
+ return -EINVAL;
+ }
+
+ /* Current code in virtio.c relies on 4K alignment. */
+ if (addr && (align != 4096)) {
+ return -EINVAL;
+ }
+
+ if (!data) {
+ return -EINVAL;
+ }
+
+ virtio_queue_set_addr(data->vdev, index, addr);
+ if (!addr) {
+ virtio_queue_set_vector(data->vdev, index, 0);
+ } else {
+ /* Fail if we don't have a big enough queue. */
+ /* TODO: Add interface to handle vring.num changing */
+ if (virtio_queue_get_num(data->vdev, index) > num) {
+ return -EINVAL;
+ }
+ virtio_queue_set_vector(data->vdev, index, index);
+ }
+ /* tell notify handler in case of config change */
+ data->vdev->config_vector = VIRTIO_PCI_QUEUE_MAX;
+ return 0;
+}
+
+static int virtio_ccw_cb(SubchDev *sch, CCW1 *ccw)
+{
+ int ret;
+ VqInfoBlock info;
+ uint8_t status;
+ VirtioFeatDesc features;
+ void *config;
+ hwaddr indicators;
+ VqConfigBlock vq_config;
+ VirtioCcwData *data = sch->driver_data;
+ bool check_len;
+ int len;
+
+ if (!ccw) {
+ return -EIO;
+ }
+
+ if (!data) {
+ return -EINVAL;
+ }
+
+ check_len = !((ccw->flags & CCW_FLAG_SLI) && !(ccw->flags & CCW_FLAG_DC));
+
+ /* Look at the command. */
+ switch (ccw->cmd_code) {
+ case CCW_CMD_SET_VQ:
+ if (check_len) {
+ if (ccw->count != sizeof(info)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(info)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ info.queue = ldq_phys(ccw->cda);
+ info.align = ldl_phys(ccw->cda + sizeof(info.queue));
+ info.index = lduw_phys(ccw->cda + sizeof(info.queue)
+ + sizeof(info.align));
+ info.num = lduw_phys(ccw->cda + sizeof(info.queue)
+ + sizeof(info.align)
+ + sizeof(info.index));
+ ret = virtio_ccw_set_vqs(sch, info.queue, info.align, info.index,
+ info.num);
+ sch->curr_status.scsw.count = 0;
+ }
+ break;
+ case CCW_CMD_VDEV_RESET:
+ virtio_reset(data->vdev);
+ ret = 0;
+ break;
+ case CCW_CMD_READ_FEAT:
+ if (check_len) {
+ if (ccw->count != sizeof(features)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(features)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ features.index = ldub_phys(ccw->cda + sizeof(features.features));
+ if (features.index < ARRAY_SIZE(data->host_features)) {
+ features.features = data->host_features[features.index];
+ } else {
+ /* Return zeroes if the guest supports more feature bits. */
+ features.features = 0;
+ }
+ stl_le_phys(ccw->cda, features.features);
+ sch->curr_status.scsw.count = ccw->count - sizeof(features);
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_WRITE_FEAT:
+ if (check_len) {
+ if (ccw->count != sizeof(features)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(features)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ features.index = ldub_phys(ccw->cda + sizeof(features.features));
+ features.features = ldl_le_phys(ccw->cda);
+ if (features.index < ARRAY_SIZE(data->host_features)) {
+ if (data->vdev->set_features) {
+ data->vdev->set_features(data->vdev, features.features);
+ }
+ data->vdev->guest_features = features.features;
+ } else {
+ /*
+ * If the guest supports more feature bits, assert that it
+ * passes us zeroes for those we don't support.
+ */
+ if (features.features) {
+ fprintf(stderr, "Guest bug: features[%i]=%x (expected 0)\n",
+ features.index, features.features);
+ /* XXX: do a unit check here? */
+ }
+ }
+ sch->curr_status.scsw.count = ccw->count - sizeof(features);
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_READ_CONF:
+ if (check_len) {
+ if (ccw->count > data->vdev->config_len) {
+ ret = -EINVAL;
+ break;
+ }
+ }
+ len = MIN(ccw->count, data->vdev->config_len);
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ data->vdev->get_config(data->vdev, data->vdev->config);
+ cpu_physical_memory_write(ccw->cda, data->vdev->config, len);
+ sch->curr_status.scsw.count = ccw->count - len;
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_WRITE_CONF:
+ if (check_len) {
+ if (ccw->count > data->vdev->config_len) {
+ ret = -EINVAL;
+ break;
+ }
+ }
+ len = MIN(ccw->count, data->vdev->config_len);
+ config = qemu_get_ram_ptr(ccw->cda);
+ if (!config) {
+ ret = -EFAULT;
+ } else {
+ memcpy(data->vdev->config, config, len);
+ if (data->vdev->set_config) {
+ data->vdev->set_config(data->vdev, data->vdev->config);
+ }
+ sch->curr_status.scsw.count = ccw->count - len;
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_WRITE_STATUS:
+ if (check_len) {
+ if (ccw->count != sizeof(status)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(status)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ status = ldub_phys(ccw->cda);
+ virtio_set_status(data->vdev, status);
+ sch->curr_status.scsw.count = ccw->count - sizeof(status);
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_SET_IND:
+ if (check_len) {
+ if (ccw->count != sizeof(indicators)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(indicators)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ indicators = ldq_phys(ccw->cda);
+ if (!indicators) {
+ ret = -EFAULT;
+ } else {
+ data->indicators = indicators;
+ sch->curr_status.scsw.count = ccw->count - sizeof(indicators);
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_SET_CONF_IND:
+ if (check_len) {
+ if (ccw->count != sizeof(indicators)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(indicators)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ indicators = ldq_phys(ccw->cda);
+ if (!indicators) {
+ ret = -EFAULT;
+ } else {
+ data->indicators2 = indicators;
+ sch->curr_status.scsw.count = ccw->count - sizeof(indicators);
+ ret = 0;
+ }
+ break;
+ case CCW_CMD_READ_VQ_CONF:
+ if (check_len) {
+ if (ccw->count != sizeof(vq_config)) {
+ ret = -EINVAL;
+ break;
+ }
+ } else if (ccw->count < sizeof(vq_config)) {
+ /* Can't execute command. */
+ ret = -EINVAL;
+ break;
+ }
+ if (!qemu_get_ram_ptr(ccw->cda)) {
+ ret = -EFAULT;
+ } else {
+ vq_config.index = lduw_phys(ccw->cda);
+ vq_config.num_max = virtio_queue_get_num(data->vdev,
+ vq_config.index);
+ stw_phys(ccw->cda + sizeof(vq_config.index), vq_config.num_max);
+ sch->curr_status.scsw.count = ccw->count - sizeof(vq_config);
+ ret = 0;
+ }
+ break;
+ default:
+ ret = -EOPNOTSUPP;
+ break;
+ }
+ return ret;
+}
+
+static int virtio_ccw_device_init(VirtioCcwData *dev, VirtIODevice *vdev)
+{
+ unsigned int cssid = 0;
+ unsigned int ssid = 0;
+ unsigned int schid;
+ unsigned int devno;
+ bool have_devno = false;
+ bool found = false;
+ SubchDev *sch;
+ int ret;
+ int num;
+
+ sch = g_malloc0(sizeof(SubchDev));
+
+ sch->driver_data = dev;
+ dev->sch = sch;
+
+ dev->vdev = vdev;
+ dev->indicators = 0;
+
+ /* Initialize subchannel structure. */
+ sch->channel_prog = NULL;
+ sch->last_cmd = NULL;
+ sch->orb = NULL;
+ /*
+ * Use a device number if provided. Otherwise, fall back to subchannel
+ * number.
+ */
+ if (dev->bus_id) {
+ num = sscanf(dev->bus_id, "%x.%x.%04x", &cssid, &ssid, &devno);
+ if (num == 3) {
+ if ((cssid > MAX_CSSID) || (ssid > MAX_SSID)) {
+ ret = -EINVAL;
+ error_report("Invalid cssid or ssid: cssid %x, ssid %x",
+ cssid, ssid);
+ goto out_err;
+ }
+ /* Enforce use of virtual cssid. */
+ if (cssid != VIRTUAL_CSSID) {
+ ret = -EINVAL;
+ error_report("cssid %x not valid for virtio devices", cssid);
+ goto out_err;
+ }
+ if (css_devno_used(cssid, ssid, devno)) {
+ ret = -EEXIST;
+ error_report("Device %x.%x.%04x already exists", cssid, ssid,
+ devno);
+ goto out_err;
+ }
+ sch->cssid = cssid;
+ sch->ssid = ssid;
+ sch->devno = devno;
+ have_devno = true;
+ } else {
+ ret = -EINVAL;
+ error_report("Malformed devno parameter '%s'", dev->bus_id);
+ goto out_err;
+ }
+ }
+
+ /* Find the next free id. */
+ if (have_devno) {
+ for (schid = 0; schid <= MAX_SCHID; schid++) {
+ if (!css_find_subch(1, cssid, ssid, schid)) {
+ sch->schid = schid;
+ css_subch_assign(cssid, ssid, schid, devno, sch);
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ ret = -ENODEV;
+ error_report("No free subchannel found for %x.%x.%04x", cssid, ssid,
+ devno);
+ goto out_err;
+ }
+ } else {
+ cssid = VIRTUAL_CSSID;
+ for (ssid = 0; ssid <= MAX_SSID; ssid++) {
+ for (schid = 0; schid <= MAX_SCHID; schid++) {
+ if (!css_find_subch(1, cssid, ssid, schid)) {
+ sch->cssid = cssid;
+ sch->ssid = ssid;
+ sch->schid = schid;
+ devno = schid;
+ /*
+ * If the devno is already taken, look further in this
+ * subchannel set.
+ */
+ while (css_devno_used(cssid, ssid, devno)) {
+ if (devno == MAX_SCHID) {
+ devno = 0;
+ } else if (devno == schid - 1) {
+ ret = -ENODEV;
+ error_report("No free devno found");
+ goto out_err;
+ } else {
+ devno++;
+ }
+ }
+ sch->devno = devno;
+ css_subch_assign(cssid, ssid, schid, devno, sch);
+ found = true;
+ break;
+ }
+ }
+ if (found) {
+ break;
+ }
+ }
+ if (!found) {
+ ret = -ENODEV;
+ error_report("Virtual channel subsystem is full!");
+ goto out_err;
+ }
+ }
+
+ /* Build initial schib. */
+ css_sch_build_virtual_schib(sch, 0, VIRTIO_CCW_CHPID_TYPE);
+
+ sch->ccw_cb = virtio_ccw_cb;
+
+ /* Build senseid data. */
+ memset(&sch->id, 0, sizeof(SenseId));
+ sch->id.reserved = 0xff;
+ sch->id.cu_type = VIRTIO_CCW_CU_TYPE;
+ sch->id.cu_model = dev->vdev->device_id;
+
+ virtio_bind_device(vdev, &virtio_ccw_bindings, dev);
+ /* Only the first 32 feature bits are used. */
+ dev->host_features[0] = vdev->get_features(vdev, dev->host_features[0]);
+ dev->host_features[0] |= 0x1 << VIRTIO_F_NOTIFY_ON_EMPTY;
+ dev->host_features[0] |= 0x1 << VIRTIO_F_BAD_FEATURE;
+
+ css_generate_sch_crws(sch->cssid, sch->ssid, sch->schid,
+ dev->qdev.hotplugged, 1);
+ return 0;
+
+out_err:
+ dev->sch = NULL;
+ g_free(sch);
+ return ret;
+}
+
+static int virtio_ccw_exit(VirtioCcwData *dev)
+{
+ SubchDev *sch = dev->sch;
+
+ if (sch) {
+ css_subch_assign(sch->cssid, sch->ssid, sch->schid, sch->devno, NULL);
+ g_free(sch);
+ }
+ dev->indicators = 0;
+ return 0;
+}
+
+static int virtio_ccw_net_init(VirtioCcwData *dev)
+{
+ VirtIODevice *vdev;
+
+ vdev = virtio_net_init((DeviceState *)dev, &dev->nic, &dev->net);
+ if (!vdev) {
+ return -1;
+ }
+
+ return virtio_ccw_device_init(dev, vdev);
+}
+
+static int virtio_ccw_net_exit(VirtioCcwData *dev)
+{
+ virtio_net_exit(dev->vdev);
+ return virtio_ccw_exit(dev);
+}
+
+static int virtio_ccw_blk_init(VirtioCcwData *dev)
+{
+ VirtIODevice *vdev;
+
+ vdev = virtio_blk_init((DeviceState *)dev, &dev->blk);
+ if (!vdev) {
+ return -1;
+ }
+
+ return virtio_ccw_device_init(dev, vdev);
+}
+
+static int virtio_ccw_blk_exit(VirtioCcwData *dev)
+{
+ virtio_blk_exit(dev->vdev);
+ blockdev_mark_auto_del(dev->blk.conf.bs);
+ return virtio_ccw_exit(dev);
+}
+
+static int virtio_ccw_serial_init(VirtioCcwData *dev)
+{
+ VirtioCcwBus *bus;
+ VirtIODevice *vdev;
+ int r;
+
+ bus = DO_UPCAST(VirtioCcwBus, bus, dev->qdev.parent_bus);
+
+ vdev = virtio_serial_init((DeviceState *)dev, &dev->serial);
+ if (!vdev) {
+ return -1;
+ }
+
+ r = virtio_ccw_device_init(dev, vdev);
+ if (!r) {
+ bus->console = dev;
+ }
+
+ return r;
+}
+
+static int virtio_ccw_serial_exit(VirtioCcwData *dev)
+{
+ VirtioCcwBus *bus;
+
+ bus = DO_UPCAST(VirtioCcwBus, bus, dev->qdev.parent_bus);
+ bus->console = NULL;
+ virtio_serial_exit(dev->vdev);
+ return virtio_ccw_exit(dev);
+}
+
+static int virtio_ccw_balloon_init(VirtioCcwData *dev)
+{
+ VirtIODevice *vdev;
+
+ vdev = virtio_balloon_init((DeviceState *)dev);
+ if (!vdev) {
+ return -1;
+ }
+
+ return virtio_ccw_device_init(dev, vdev);
+}
+
+static int virtio_ccw_balloon_exit(VirtioCcwData *dev)
+{
+ virtio_balloon_exit(dev->vdev);
+ return virtio_ccw_exit(dev);
+}
+
+static int virtio_ccw_scsi_init(VirtioCcwData *dev)
+{
+ VirtIODevice *vdev;
+
+ vdev = virtio_scsi_init((DeviceState *)dev, &dev->scsi);
+ if (!vdev) {
+ return -1;
+ }
+
+ return virtio_ccw_device_init(dev, vdev);
+}
+
+static int virtio_ccw_scsi_exit(VirtioCcwData *dev)
+{
+ virtio_scsi_exit(dev->vdev);
+ return virtio_ccw_exit(dev);
+}
+
+VirtioCcwData *virtio_ccw_bus_console(VirtioCcwBus *bus)
+{
+ return bus->console;
+}
+
+static void virtio_ccw_notify(void *opaque, uint16_t vector)
+{
+ VirtioCcwData *dev = opaque;
+ SubchDev *sch = dev->sch;
+ uint64_t indicators;
+
+ if (vector >= 128) {
+ return;
+ }
+
+ if (vector < VIRTIO_PCI_QUEUE_MAX) {
+ indicators = ldq_phys(dev->indicators);
+ set_bit(vector, &indicators);
+ stq_phys(dev->indicators, indicators);
+ } else {
+ vector = 0;
+ indicators = ldq_phys(dev->indicators2);
+ set_bit(vector, &indicators);
+ stq_phys(dev->indicators2, indicators);
+ }
+
+ css_conditional_io_interrupt(sch);
+
+}
+
+static unsigned virtio_ccw_get_features(void *opaque)
+{
+ VirtioCcwData *dev = opaque;
+
+ /* Only the first 32 feature bits are used. */
+ return dev->host_features[0];
+}
+
+/**************** Virtio-ccw Bus Device Descriptions *******************/
+
+static const VirtIOBindings virtio_ccw_bindings = {
+ .notify = virtio_ccw_notify,
+ .get_features = virtio_ccw_get_features,
+};
+
+static Property virtio_ccw_net_properties[] = {
+ DEFINE_PROP_STRING("devno", VirtioCcwData, bus_id),
+ DEFINE_VIRTIO_NET_FEATURES(VirtioCcwData, host_features[0]),
+ DEFINE_NIC_PROPERTIES(VirtioCcwData, nic),
+ DEFINE_PROP_UINT32("x-txtimer", VirtioCcwData,
+ net.txtimer, TX_TIMER_INTERVAL),
+ DEFINE_PROP_INT32("x-txburst", VirtioCcwData,
+ net.txburst, TX_BURST),
+ DEFINE_PROP_STRING("tx", VirtioCcwData, net.tx),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ccw_net_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_net_init;
+ k->exit = virtio_ccw_net_exit;
+ dc->props = virtio_ccw_net_properties;
+}
+
+static TypeInfo virtio_ccw_net = {
+ .name = "virtio-net-ccw",
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_net_class_init,
+};
+
+static Property virtio_ccw_blk_properties[] = {
+ DEFINE_PROP_STRING("devno", VirtioCcwData, bus_id),
+ DEFINE_BLOCK_PROPERTIES(VirtioCcwData, blk.conf),
+ DEFINE_PROP_STRING("serial", VirtioCcwData, blk.serial),
+#ifdef __linux__
+ DEFINE_PROP_BIT("scsi", VirtioCcwData, blk.scsi, 0, true),
+#endif
+ DEFINE_VIRTIO_BLK_FEATURES(VirtioCcwData, host_features[0]),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ccw_blk_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_blk_init;
+ k->exit = virtio_ccw_blk_exit;
+ dc->props = virtio_ccw_blk_properties;
+}
+
+static TypeInfo virtio_ccw_blk = {
+ .name = "virtio-blk-ccw",
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_blk_class_init,
+};
+
+static Property virtio_ccw_serial_properties[] = {
+ DEFINE_PROP_STRING("devno", VirtioCcwData, bus_id),
+ DEFINE_PROP_UINT32("max_ports", VirtioCcwData, serial.max_virtserial_ports,
+ 31),
+ DEFINE_VIRTIO_COMMON_FEATURES(VirtioCcwData, host_features[0]),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ccw_serial_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_serial_init;
+ k->exit = virtio_ccw_serial_exit;
+ dc->props = virtio_ccw_serial_properties;
+}
+
+static TypeInfo virtio_ccw_serial = {
+ .name = "virtio-serial-ccw",
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_serial_class_init,
+};
+
+static Property virtio_ccw_balloon_properties[] = {
+ DEFINE_PROP_STRING("devno", VirtioCcwData, bus_id),
+ DEFINE_VIRTIO_COMMON_FEATURES(VirtioCcwData, host_features[0]),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ccw_balloon_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_balloon_init;
+ k->exit = virtio_ccw_balloon_exit;
+ dc->props = virtio_ccw_balloon_properties;
+}
+
+static TypeInfo virtio_ccw_balloon = {
+ .name = "virtio-balloon-ccw",
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_balloon_class_init,
+};
+
+static Property virtio_ccw_scsi_properties[] = {
+ DEFINE_PROP_STRING("devno", VirtioCcwData, bus_id),
+ DEFINE_VIRTIO_SCSI_PROPERTIES(VirtioCcwData, host_features[0], scsi),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_ccw_scsi_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_scsi_init;
+ k->exit = virtio_ccw_scsi_exit;
+ dc->props = virtio_ccw_scsi_properties;
+}
+
+static TypeInfo virtio_ccw_scsi = {
+ .name = "virtio-scsi-ccw",
+ .parent = TYPE_VIRTIO_CCW_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_scsi_class_init,
+};
+
+static int virtio_ccw_busdev_init(DeviceState *dev)
+{
+ VirtioCcwData *_dev = (VirtioCcwData *)dev;
+ VirtIOCCWDeviceClass *_info = VIRTIO_CCW_DEVICE_GET_CLASS(dev);
+
+ return _info->init(_dev);
+}
+
+static int virtio_ccw_busdev_exit(DeviceState *dev)
+{
+ VirtioCcwData *_dev = (VirtioCcwData *)dev;
+ VirtIOCCWDeviceClass *_info = VIRTIO_CCW_DEVICE_GET_CLASS(dev);
+
+ return _info->exit(_dev);
+}
+
+static int virtio_ccw_busdev_unplug(DeviceState *dev)
+{
+ VirtioCcwData *_dev = (VirtioCcwData *)dev;
+ SubchDev *sch = _dev->sch;
+
+ /*
+ * We should arrive here only for device_del, since we don't support
+ * direct hot(un)plug of channels, but only through virtio.
+ */
+ assert(sch != NULL);
+ /* Subchannel is now disabled and no longer valid. */
+ sch->curr_status.pmcw.ena = 0;
+ sch->curr_status.pmcw.dnv = 0;
+
+ css_generate_sch_crws(sch->cssid, sch->ssid, sch->schid, 1, 0);
+
+ object_unparent(OBJECT(dev));
+ qdev_free(dev);
+ return 0;
+}
+
+static void virtio_ccw_device_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+
+ dc->init = virtio_ccw_busdev_init;
+ dc->exit = virtio_ccw_busdev_exit;
+ dc->unplug = virtio_ccw_busdev_unplug;
+ dc->bus_type = TYPE_VIRTIO_CCW_BUS;
+
+}
+
+static TypeInfo virtio_ccw_device_info = {
+ .name = TYPE_VIRTIO_CCW_DEVICE,
+ .parent = TYPE_DEVICE,
+ .instance_size = sizeof(VirtioCcwData),
+ .class_init = virtio_ccw_device_class_init,
+ .class_size = sizeof(VirtIOCCWDeviceClass),
+ .abstract = true,
+};
+
+/***************** Virtio-ccw Bus Bridge Device ********************/
+/* Only required to have the virtio bus as child in the system bus */
+
+static int virtio_ccw_bridge_init(SysBusDevice *dev)
+{
+ /* nothing */
+ return 0;
+}
+
+static void virtio_ccw_bridge_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ SysBusDeviceClass *k = SYS_BUS_DEVICE_CLASS(klass);
+
+ k->init = virtio_ccw_bridge_init;
+ dc->no_user = 1;
+}
+
+static TypeInfo virtio_ccw_bridge_info = {
+ .name = "virtio-ccw-bridge",
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_size = sizeof(SysBusDevice),
+ .class_init = virtio_ccw_bridge_class_init,
+};
+
+static void virtio_ccw_register(void)
+{
+ type_register_static(&virtio_ccw_bus_info);
+ type_register_static(&virtio_ccw_device_info);
+ type_register_static(&virtio_ccw_serial);
+ type_register_static(&virtio_ccw_blk);
+ type_register_static(&virtio_ccw_net);
+ type_register_static(&virtio_ccw_balloon);
+ type_register_static(&virtio_ccw_scsi);
+ type_register_static(&virtio_ccw_bridge_info);
+}
+type_init(virtio_ccw_register);
diff --git a/hw/s390x/virtio-ccw.h b/hw/s390x/virtio-ccw.h
new file mode 100644
index 0000000..f0dee1e
--- /dev/null
+++ b/hw/s390x/virtio-ccw.h
@@ -0,0 +1,81 @@
+/*
+ * virtio ccw target definitions
+ *
+ * Copyright 2012 IBM Corp.
+ * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include <hw/virtio-blk.h>
+#include <hw/virtio-net.h>
+#include <hw/virtio-serial.h>
+#include <hw/virtio-scsi.h>
+
+#define VIRTUAL_CSSID 0xfe
+
+#define VIRTIO_CCW_CU_TYPE 0x3832
+#define VIRTIO_CCW_CHPID_TYPE 0x32
+
+#define CCW_CMD_SET_VQ 0x13
+#define CCW_CMD_VDEV_RESET 0x33
+#define CCW_CMD_READ_FEAT 0x12
+#define CCW_CMD_WRITE_FEAT 0x11
+#define CCW_CMD_READ_CONF 0x22
+#define CCW_CMD_WRITE_CONF 0x21
+#define CCW_CMD_WRITE_STATUS 0x31
+#define CCW_CMD_SET_IND 0x43
+#define CCW_CMD_SET_CONF_IND 0x53
+#define CCW_CMD_READ_VQ_CONF 0x32
+
+#define TYPE_VIRTIO_CCW_DEVICE "virtio-ccw-device"
+#define VIRTIO_CCW_DEVICE(obj) \
+ OBJECT_CHECK(VirtioCcwData, (obj), TYPE_VIRTIO_CCW_DEVICE)
+#define VIRTIO_CCW_DEVICE_CLASS(klass) \
+ OBJECT_CLASS_CHECK(VirtIOCCWDeviceClass, (klass), TYPE_VIRTIO_CCW_DEVICE)
+#define VIRTIO_CCW_DEVICE_GET_CLASS(obj) \
+ OBJECT_GET_CLASS(VirtIOCCWDeviceClass, (obj), TYPE_VIRTIO_CCW_DEVICE)
+
+#define TYPE_VIRTIO_CCW_BUS "virtio-ccw-bus"
+#define VIRTIO_CCW_BUS(obj) \
+ OBJECT_CHECK(VirtioCcwBus, (obj), TYPE_VIRTIO_CCW_BUS)
+
+typedef struct VirtioCcwData VirtioCcwData;
+
+typedef struct VirtIOCCWDeviceClass {
+ DeviceClass qdev;
+ int (*init)(VirtioCcwData *dev);
+ int (*exit)(VirtioCcwData *dev);
+} VirtIOCCWDeviceClass;
+
+/* Change here if we want to support more feature bits. */
+#define VIRTIO_CCW_FEATURE_SIZE 1
+
+struct VirtioCcwData {
+ DeviceState qdev;
+ SubchDev *sch;
+ VirtIODevice *vdev;
+ char *bus_id;
+ VirtIOBlkConf blk;
+ NICConf nic;
+ uint32_t host_features[VIRTIO_CCW_FEATURE_SIZE];
+ virtio_serial_conf serial;
+ virtio_net_conf net;
+ VirtIOSCSIConf scsi;
+ /* Guest provided values: */
+ hwaddr indicators;
+ hwaddr indicators2;
+};
+
+/* virtio-ccw bus type */
+typedef struct VirtioCcwBus {
+ BusState bus;
+ VirtioCcwData *console;
+} VirtioCcwBus;
+
+VirtioCcwBus *virtio_ccw_bus_init(void);
+void virtio_ccw_device_update_status(SubchDev *sch);
+VirtioCcwData *virtio_ccw_bus_console(VirtioCcwBus *bus);
+VirtIODevice *virtio_ccw_get_vdev(SubchDev *sch);
--
1.7.12.4
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support.
2012-10-31 16:24 ` [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support Cornelia Huck
@ 2012-11-13 1:17 ` Marcelo Tosatti
2012-11-13 10:11 ` Cornelia Huck
2012-11-19 13:30 ` Alexander Graf
1 sibling, 1 reply; 8+ messages in thread
From: Marcelo Tosatti @ 2012-11-13 1:17 UTC (permalink / raw)
To: Cornelia Huck
Cc: linux-s390, Anthony Liguori, KVM, Carsten Otte, Sebastian Ott,
Heiko Carstens, qemu-devel, Alexander Graf, Christian Borntraeger,
Avi Kivity, Martin Schwidefsky
Hi Cornelia,
On Wed, Oct 31, 2012 at 05:24:47PM +0100, Cornelia Huck wrote:
> Provide a mechanism for qemu to provide fully virtual subchannels to
> the guest. In the KVM case, this relies on the kernel's css support
> for I/O and machine check interrupt handling. The !KVM case handles
> interrupts on its own.
>
> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> ---
> hw/s390x/Makefile.objs | 1 +
> hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++++++++++++
> hw/s390x/css.h | 90 ++++
> target-s390x/Makefile.objs | 2 +-
> target-s390x/cpu.h | 232 +++++++++
> target-s390x/helper.c | 146 ++++++
> target-s390x/ioinst.c | 737 +++++++++++++++++++++++++++
> target-s390x/ioinst.h | 213 ++++++++
> target-s390x/kvm.c | 251 ++++++++-
> target-s390x/misc_helper.c | 6 +-
> 10 files changed, 2872 insertions(+), 15 deletions(-)
> create mode 100644 hw/s390x/css.c
> create mode 100644 hw/s390x/css.h
> create mode 100644 target-s390x/ioinst.c
> create mode 100644 target-s390x/ioinst.h
> +void kvm_s390_enable_css_support(CPUS390XState *env)
> +{
> + struct kvm_enable_cap cap = {};
> + int r;
> +
> + /* Activate host kernel channel subsystem support. */
> + if (kvm_enabled()) {
> + /* One CPU has to run */
> + s390_add_running_cpu(env);
Care to explain this?
> +
> + cap.cap = KVM_CAP_S390_CSS_SUPPORT;
> + r = kvm_vcpu_ioctl(env, KVM_ENABLE_CAP, &cap);
> + assert(r == 0);
> + }
> +}
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support.
2012-11-13 1:17 ` Marcelo Tosatti
@ 2012-11-13 10:11 ` Cornelia Huck
0 siblings, 0 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-11-13 10:11 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: linux-s390, Anthony Liguori, KVM, Carsten Otte, Sebastian Ott,
Heiko Carstens, qemu-devel, Alexander Graf, Christian Borntraeger,
Avi Kivity, Martin Schwidefsky
On Mon, 12 Nov 2012 23:17:55 -0200
Marcelo Tosatti <mtosatti@redhat.com> wrote:
> Hi Cornelia,
>
> On Wed, Oct 31, 2012 at 05:24:47PM +0100, Cornelia Huck wrote:
> > Provide a mechanism for qemu to provide fully virtual subchannels to
> > the guest. In the KVM case, this relies on the kernel's css support
> > for I/O and machine check interrupt handling. The !KVM case handles
> > interrupts on its own.
> >
> > Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > ---
> > hw/s390x/Makefile.objs | 1 +
> > hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++++++++++++
> > hw/s390x/css.h | 90 ++++
> > target-s390x/Makefile.objs | 2 +-
> > target-s390x/cpu.h | 232 +++++++++
> > target-s390x/helper.c | 146 ++++++
> > target-s390x/ioinst.c | 737 +++++++++++++++++++++++++++
> > target-s390x/ioinst.h | 213 ++++++++
> > target-s390x/kvm.c | 251 ++++++++-
> > target-s390x/misc_helper.c | 6 +-
> > 10 files changed, 2872 insertions(+), 15 deletions(-)
> > create mode 100644 hw/s390x/css.c
> > create mode 100644 hw/s390x/css.h
> > create mode 100644 target-s390x/ioinst.c
> > create mode 100644 target-s390x/ioinst.h
>
> > +void kvm_s390_enable_css_support(CPUS390XState *env)
> > +{
> > + struct kvm_enable_cap cap = {};
> > + int r;
> > +
> > + /* Activate host kernel channel subsystem support. */
> > + if (kvm_enabled()) {
> > + /* One CPU has to run */
> > + s390_add_running_cpu(env);
>
> Care to explain this?
Old code leftovers; I've removed it.
>
> > +
> > + cap.cap = KVM_CAP_S390_CSS_SUPPORT;
> > + r = kvm_vcpu_ioctl(env, KVM_ENABLE_CAP, &cap);
> > + assert(r == 0);
> > + }
> > +}
>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support.
2012-10-31 16:24 ` [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support Cornelia Huck
2012-11-13 1:17 ` Marcelo Tosatti
@ 2012-11-19 13:30 ` Alexander Graf
2012-11-20 9:27 ` Cornelia Huck
1 sibling, 1 reply; 8+ messages in thread
From: Alexander Graf @ 2012-11-19 13:30 UTC (permalink / raw)
To: Cornelia Huck
Cc: linux-s390, Anthony Liguori, KVM, Carsten Otte, Sebastian Ott,
Marcelo Tosatti, Heiko Carstens, qemu-devel,
Christian Borntraeger, Avi Kivity, Martin Schwidefsky
On 31.10.2012, at 17:24, Cornelia Huck wrote:
> Provide a mechanism for qemu to provide fully virtual subchannels to
> the guest. In the KVM case, this relies on the kernel's css support
> for I/O and machine check interrupt handling. The !KVM case handles
> interrupts on its own.
>
> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> ---
> hw/s390x/Makefile.objs | 1 +
> hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++++++++++++
> hw/s390x/css.h | 90 ++++
> target-s390x/Makefile.objs | 2 +-
> target-s390x/cpu.h | 232 +++++++++
> target-s390x/helper.c | 146 ++++++
> target-s390x/ioinst.c | 737 +++++++++++++++++++++++++++
> target-s390x/ioinst.h | 213 ++++++++
> target-s390x/kvm.c | 251 ++++++++-
> target-s390x/misc_helper.c | 6 +-
> 10 files changed, 2872 insertions(+), 15 deletions(-)
> create mode 100644 hw/s390x/css.c
> create mode 100644 hw/s390x/css.h
> create mode 100644 target-s390x/ioinst.c
> create mode 100644 target-s390x/ioinst.h
>
> diff --git a/hw/s390x/Makefile.objs b/hw/s390x/Makefile.objs
> index 096dfcd..378b099 100644
> --- a/hw/s390x/Makefile.objs
> +++ b/hw/s390x/Makefile.objs
> @@ -4,3 +4,4 @@ obj-y := $(addprefix ../,$(obj-y))
> obj-y += sclp.o
> obj-y += event-facility.o
> obj-y += sclpquiesce.o sclpconsole.o
> +obj-y += css.o
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> new file mode 100644
> index 0000000..9adffb3
> --- /dev/null
> +++ b/hw/s390x/css.c
> @@ -0,0 +1,1209 @@
> +/*
> + * Channel subsystem base support.
> + *
> + * Copyright 2012 IBM Corp.
> + * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or (at
> + * your option) any later version. See the COPYING file in the top-level
> + * directory.
> + */
> +
> +#include "qemu-thread.h"
> +#include "qemu-queue.h"
> +#include <hw/qdev.h>
> +#include "bitops.h"
> +#include "kvm.h"
> +#include "cpu.h"
> +#include "ioinst.h"
> +#include "css.h"
> +#include "virtio-ccw.h"
> +
> +typedef struct CrwContainer {
> + CRW crw;
> + QTAILQ_ENTRY(CrwContainer) sibling;
> +} CrwContainer;
> +
> +typedef struct ChpInfo {
> + uint8_t in_use;
> + uint8_t type;
> + uint8_t is_virtual;
> +} ChpInfo;
> +
> +typedef struct SubchSet {
> + SubchDev *sch[MAX_SCHID + 1];
> + unsigned long schids_used[BITS_TO_LONGS(MAX_SCHID + 1)];
> + unsigned long devnos_used[BITS_TO_LONGS(MAX_SCHID + 1)];
> +} SubchSet;
> +
> +typedef struct CssImage {
> + SubchSet *sch_set[MAX_SSID + 1];
> + ChpInfo chpids[MAX_CHPID + 1];
> +} CssImage;
> +
> +typedef struct ChannelSubSys {
> + QTAILQ_HEAD(, CrwContainer) pending_crws;
> + bool do_crw_mchk;
> + bool crws_lost;
> + uint8_t max_cssid;
> + uint8_t max_ssid;
> + bool chnmon_active;
> + uint64_t chnmon_area;
> + CssImage *css[MAX_CSSID + 1];
> + uint8_t default_cssid;
> +} ChannelSubSys;
> +
> +static ChannelSubSys *channel_subsys;
> +
> +int css_create_css_image(uint8_t cssid, bool default_image)
> +{
> + if (cssid > MAX_CSSID) {
> + return -EINVAL;
> + }
> + if (channel_subsys->css[cssid]) {
> + return -EBUSY;
> + }
> + channel_subsys->css[cssid] = g_try_malloc0(sizeof(CssImage));
> + if (!channel_subsys->css[cssid]) {
> + return -ENOMEM;
> + }
> + if (default_image) {
> + channel_subsys->default_cssid = cssid;
> + }
> + return 0;
> +}
> +
> +static void css_write_phys_pmcw(uint64_t addr, PMCW *pmcw)
> +{
> + int i;
> + uint32_t offset = 0;
> + struct copy_pmcw {
> + uint32_t intparm;
> + uint16_t flags;
> + uint16_t devno;
> + uint8_t lpm;
> + uint8_t pnom;
> + uint8_t lpum;
> + uint8_t pim;
> + uint16_t mbi;
> + uint8_t pom;
> + uint8_t pam;
> + uint8_t chpid[8];
> + uint32_t chars;
> + } *copy;
This needs to be packed. Also, it might be a good idea to separate the struct definition from the actual code ;).
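Roughly something like this is what I have in mind (just an untested sketch; the QEMU_PACKED macro is assumed to be available, field names copied from the patch):

/* Same layout as in the patch, but packed and defined at file scope. */
typedef struct CopyPmcw {
    uint32_t intparm;
    uint16_t flags;
    uint16_t devno;
    uint8_t lpm;
    uint8_t pnom;
    uint8_t lpum;
    uint8_t pim;
    uint16_t mbi;
    uint8_t pom;
    uint8_t pam;
    uint8_t chpid[8];
    uint32_t chars;
} QEMU_PACKED CopyPmcw;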
> +
> + copy = (struct copy_pmcw *)pmcw;
This will break on any system that doesn't coincidentally stick to the same bitfield order as s390x. Please drop any usage of bitfields in QEMU source code :).
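The usual replacement is a plain flags word plus mask constants, so nothing depends on how the compiler lays out bitfields. Untested sketch with made-up names and mask values, assuming the pmcw bitfields get folded into a single uint16_t flags field:

/* Made-up masks and helper, for illustration only. */
#define PMCW_FLAGS_MASK_ENA 0x0080
#define PMCW_FLAGS_MASK_DNV 0x0001

static inline int pmcw_enabled(uint16_t flags)
{
    return (flags & PMCW_FLAGS_MASK_ENA) != 0;
}

/* Callers then do pmcw->flags &= ~PMCW_FLAGS_MASK_ENA; instead of
 * pmcw->ena = 0; and the guest-visible layout stays fixed. */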
> + stl_phys(addr + offset, copy->intparm);
> + offset += sizeof(copy->intparm);
Can't you just use cpu_physical_memory_map() and assign things left and right as you see fit? Or prepare the target endianness struct on the stack and cpu_physical_memory_read/write it from/to guest memory.
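For the latter, an untested sketch of the pattern I mean (write_pmcw_sketch and its two fields are placeholders rather than the real PMCW layout; cpu_to_be*() and cpu_physical_memory_write() from the usual headers are assumed):

/* Build the big-endian guest image on the stack, then copy it out in
 * one call instead of a stl_phys/stw_phys per field. */
static void write_pmcw_sketch(hwaddr addr, uint32_t intparm, uint16_t devno)
{
    struct {
        uint32_t intparm;
        uint16_t devno;
    } QEMU_PACKED copy;

    copy.intparm = cpu_to_be32(intparm);   /* the s390x guest is big-endian */
    copy.devno = cpu_to_be16(devno);
    cpu_physical_memory_write(addr, &copy, sizeof(copy));
}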
Also, please split this patch into smaller patches :). As it is now, it's very hard to review. However, apart from the above issues (which may well recur in other places further down in the code; I just didn't comment on them there), I couldn't see any major problems so far. But please split it nevertheless so that I have an easier time reviewing it :)
Alex
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] s390: Virtual channel subsystem support.
2012-11-19 13:30 ` Alexander Graf
@ 2012-11-20 9:27 ` Cornelia Huck
0 siblings, 0 replies; 8+ messages in thread
From: Cornelia Huck @ 2012-11-20 9:27 UTC (permalink / raw)
To: Alexander Graf
Cc: linux-s390, Anthony Liguori, KVM, Carsten Otte, Sebastian Ott,
Marcelo Tosatti, Heiko Carstens, qemu-devel,
Christian Borntraeger, Avi Kivity, Martin Schwidefsky
On Mon, 19 Nov 2012 14:30:00 +0100
Alexander Graf <agraf@suse.de> wrote:
>
> On 31.10.2012, at 17:24, Cornelia Huck wrote:
>
> > Provide a mechanism for qemu to provide fully virtual subchannels to
> > the guest. In the KVM case, this relies on the kernel's css support
> > for I/O and machine check interrupt handling. The !KVM case handles
> > interrupts on its own.
> >
> > Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> > ---
> > hw/s390x/Makefile.objs | 1 +
> > hw/s390x/css.c | 1209 ++++++++++++++++++++++++++++++++++++++++++++
> > hw/s390x/css.h | 90 ++++
> > target-s390x/Makefile.objs | 2 +-
> > target-s390x/cpu.h | 232 +++++++++
> > target-s390x/helper.c | 146 ++++++
> > target-s390x/ioinst.c | 737 +++++++++++++++++++++++++++
> > target-s390x/ioinst.h | 213 ++++++++
> > target-s390x/kvm.c | 251 ++++++++-
> > target-s390x/misc_helper.c | 6 +-
> > 10 files changed, 2872 insertions(+), 15 deletions(-)
> > create mode 100644 hw/s390x/css.c
> > create mode 100644 hw/s390x/css.h
> > create mode 100644 target-s390x/ioinst.c
> > create mode 100644 target-s390x/ioinst.h
> >
> > diff --git a/hw/s390x/Makefile.objs b/hw/s390x/Makefile.objs
> > index 096dfcd..378b099 100644
> > --- a/hw/s390x/Makefile.objs
> > +++ b/hw/s390x/Makefile.objs
> > @@ -4,3 +4,4 @@ obj-y := $(addprefix ../,$(obj-y))
> > obj-y += sclp.o
> > obj-y += event-facility.o
> > obj-y += sclpquiesce.o sclpconsole.o
> > +obj-y += css.o
> > diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> > new file mode 100644
> > index 0000000..9adffb3
> > --- /dev/null
> > +++ b/hw/s390x/css.c
> > @@ -0,0 +1,1209 @@
> > +/*
> > + * Channel subsystem base support.
> > + *
> > + * Copyright 2012 IBM Corp.
> > + * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or (at
> > + * your option) any later version. See the COPYING file in the top-level
> > + * directory.
> > + */
> > +
> > +#include "qemu-thread.h"
> > +#include "qemu-queue.h"
> > +#include <hw/qdev.h>
> > +#include "bitops.h"
> > +#include "kvm.h"
> > +#include "cpu.h"
> > +#include "ioinst.h"
> > +#include "css.h"
> > +#include "virtio-ccw.h"
> > +
> > +typedef struct CrwContainer {
> > + CRW crw;
> > + QTAILQ_ENTRY(CrwContainer) sibling;
> > +} CrwContainer;
> > +
> > +typedef struct ChpInfo {
> > + uint8_t in_use;
> > + uint8_t type;
> > + uint8_t is_virtual;
> > +} ChpInfo;
> > +
> > +typedef struct SubchSet {
> > + SubchDev *sch[MAX_SCHID + 1];
> > + unsigned long schids_used[BITS_TO_LONGS(MAX_SCHID + 1)];
> > + unsigned long devnos_used[BITS_TO_LONGS(MAX_SCHID + 1)];
> > +} SubchSet;
> > +
> > +typedef struct CssImage {
> > + SubchSet *sch_set[MAX_SSID + 1];
> > + ChpInfo chpids[MAX_CHPID + 1];
> > +} CssImage;
> > +
> > +typedef struct ChannelSubSys {
> > + QTAILQ_HEAD(, CrwContainer) pending_crws;
> > + bool do_crw_mchk;
> > + bool crws_lost;
> > + uint8_t max_cssid;
> > + uint8_t max_ssid;
> > + bool chnmon_active;
> > + uint64_t chnmon_area;
> > + CssImage *css[MAX_CSSID + 1];
> > + uint8_t default_cssid;
> > +} ChannelSubSys;
> > +
> > +static ChannelSubSys *channel_subsys;
> > +
> > +int css_create_css_image(uint8_t cssid, bool default_image)
> > +{
> > + if (cssid > MAX_CSSID) {
> > + return -EINVAL;
> > + }
> > + if (channel_subsys->css[cssid]) {
> > + return -EBUSY;
> > + }
> > + channel_subsys->css[cssid] = g_try_malloc0(sizeof(CssImage));
> > + if (!channel_subsys->css[cssid]) {
> > + return -ENOMEM;
> > + }
> > + if (default_image) {
> > + channel_subsys->default_cssid = cssid;
> > + }
> > + return 0;
> > +}
> > +
> > +static void css_write_phys_pmcw(uint64_t addr, PMCW *pmcw)
> > +{
> > + int i;
> > + uint32_t offset = 0;
> > + struct copy_pmcw {
> > + uint32_t intparm;
> > + uint16_t flags;
> > + uint16_t devno;
> > + uint8_t lpm;
> > + uint8_t pnom;
> > + uint8_t lpum;
> > + uint8_t pim;
> > + uint16_t mbi;
> > + uint8_t pom;
> > + uint8_t pam;
> > + uint8_t chpid[8];
> > + uint32_t chars;
> > + } *copy;
>
> This needs to be packed. Also, it might be a good idea to separate the struct definition from the actual code ;).
>
> > +
> > + copy = (struct copy_pmcw *)pmcw;
>
> This will break on any system that doesn't coincidentally stick to the same bitfield order as s390x. Please drop any usage of bitfields in QEMU source code :).
>
> > + stl_phys(addr + offset, copy->intparm);
> > + offset += sizeof(copy->intparm);
>
> Can't you just use cpu_physical_memory_map() and assign things left and right as you see fit? Or prepare the target endianness struct on the stack and cpu_physical_memory_read/write it from/to guest memory.
All that copying stuff (other places as well) was still on my todo list
- just wanted to get the patches out of the door so people could take a
look at the interface.
>
> Also, please split this patch into smaller patches :). As it is now, it's very hard to review. However, apart from the above issues (which may well recur in other places further down in the code; I just didn't comment on them there), I couldn't see any major problems so far. But please split it nevertheless so that I have an easier time reviewing it :)
I'll try, but I found it hard to come up with a logical split.
>
>
> Alex
>
^ permalink raw reply [flat|nested] 8+ messages in thread