* [PATCH v8 0/8] Support RISC-V IOPMP
@ 2024-07-15 9:56 Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 1/8] memory: Introduce memory region fetch operation Ethan Chen via
` (8 more replies)
0 siblings, 9 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 9:56 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
This series implements basic functions of IOPMP specification v0.9.1 rapid-k
model.
The specification url:
https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
When IOPMP is enabled, memory access to system memory from devices and
the CPU will be checked by the IOPMP.
The issue of CPU access to non-CPU address space via IOMMU was previously
mentioned by Jim Shu, who provided a patch[1] to fix it. IOPMP also requires
this patch.
[1] accel/tcg: Store section pointer in CPUTLBEntryFull
https://patchew.org/QEMU/20240612081416.29704-1-jim.shu@sifive.com/20240612081416.29704-2-jim.shu@sifive.com/
Changes for v8:
- Support transactions from CPU
- Add an API to set up IOPMP protection for system memory
- Add an API to configure the RISCV CPU to support IOPMP and specify the
CPU's RRID
- Add an API for DMA operation with IOPMP support
- Add SPDX license identifiers to new files (Stefan W.)
- Remove IOPMP PCI interface(pci_setup_iommu) (Zhiwei)
Changes for v7:
- Change the specification version to v0.9.1
- Remove the sps extension
- Remove stall support, transaction information which need requestor device
support.
- Remove iopmp_cascade option for virt machine
- Refine 'addr' range checks switch case (Daniel)
Ethan Chen (8):
memory: Introduce memory region fetch operation
system/physmem: Support IOMMU granularity smaller than TARGET_PAGE
size
target/riscv: Add support for IOPMP
hw/misc/riscv_iopmp: Add RISC-V IOPMP device
hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system
memory
hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API
hw/riscv/virt: Add IOPMP support
accel/tcg/cputlb.c | 29 +-
docs/system/riscv/virt.rst | 5 +
hw/misc/Kconfig | 3 +
hw/misc/meson.build | 1 +
hw/misc/riscv_iopmp.c | 1289 +++++++++++++++++++++++++++++++++
hw/misc/trace-events | 3 +
hw/riscv/Kconfig | 1 +
hw/riscv/virt.c | 63 ++
include/exec/memory.h | 30 +
include/hw/misc/riscv_iopmp.h | 173 +++++
include/hw/riscv/virt.h | 5 +-
system/memory.c | 104 +++
system/physmem.c | 4 +
system/trace-events | 2 +
target/riscv/cpu_cfg.h | 2 +
target/riscv/cpu_helper.c | 18 +-
16 files changed, 1722 insertions(+), 10 deletions(-)
create mode 100644 hw/misc/riscv_iopmp.c
create mode 100644 include/hw/misc/riscv_iopmp.h
--
2.34.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v8 1/8] memory: Introduce memory region fetch operation
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
@ 2024-07-15 9:56 ` Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size Ethan Chen via
` (7 subsequent siblings)
8 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 9:56 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
Allow memory regions to have different behaviors for read and fetch
operations.
For example, the RISC-V IOPMP could raise an interrupt when the CPU
tries to fetch from a non-executable region.
If the fetch operation for a memory region is not implemented, the read
operation will still be used for fetch operations.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
accel/tcg/cputlb.c | 9 +++-
include/exec/memory.h | 30 ++++++++++++
system/memory.c | 104 ++++++++++++++++++++++++++++++++++++++++++
system/trace-events | 2 +
4 files changed, 143 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 117b516739..edb3715017 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1942,8 +1942,13 @@ static uint64_t int_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
this_size = 1 << this_mop;
this_mop |= MO_BE;
- r = memory_region_dispatch_read(mr, mr_offset, &val,
- this_mop, full->attrs);
+ if (type == MMU_INST_FETCH) {
+ r = memory_region_dispatch_fetch(mr, mr_offset, &val,
+ this_mop, full->attrs);
+ } else {
+ r = memory_region_dispatch_read(mr, mr_offset, &val,
+ this_mop, full->attrs);
+ }
if (unlikely(r != MEMTX_OK)) {
io_failed(cpu, full, addr, this_size, type, mmu_idx, r, ra);
}
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 02f7528ec0..d837d7d7eb 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -274,6 +274,13 @@ struct MemoryRegionOps {
uint64_t data,
unsigned size);
+ /* Fetch from the memory region. @addr is relative to @mr; @size is
+ * in bytes. */
+ uint64_t (*fetch)(void *opaque,
+ hwaddr addr,
+ unsigned size);
+
+
MemTxResult (*read_with_attrs)(void *opaque,
hwaddr addr,
uint64_t *data,
@@ -284,6 +291,12 @@ struct MemoryRegionOps {
uint64_t data,
unsigned size,
MemTxAttrs attrs);
+ MemTxResult (*fetch_with_attrs)(void *opaque,
+ hwaddr addr,
+ uint64_t *data,
+ unsigned size,
+ MemTxAttrs attrs);
+
enum device_endian endianness;
/* Guest-visible constraints: */
@@ -2602,6 +2615,23 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
MemOp op,
MemTxAttrs attrs);
+
+/**
+ * memory_region_dispatch_fetch: perform a fetch directly to the specified
+ * MemoryRegion.
+ *
+ * @mr: #MemoryRegion to access
+ * @addr: address within that region
+ * @pval: pointer to uint64_t which the data is written to
+ * @op: size, sign, and endianness of the memory operation
+ * @attrs: memory transaction attributes to use for the access
+ */
+MemTxResult memory_region_dispatch_fetch(MemoryRegion *mr,
+ hwaddr addr,
+ uint64_t *pval,
+ MemOp op,
+ MemTxAttrs attrs);
+
/**
* address_space_init: initializes an address space
*
diff --git a/system/memory.c b/system/memory.c
index 5e6eb459d5..b46721446c 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -477,6 +477,51 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
return r;
}
+static MemTxResult memory_region_fetch_accessor(MemoryRegion *mr,
+ hwaddr addr,
+ uint64_t *value,
+ unsigned size,
+ signed shift,
+ uint64_t mask,
+ MemTxAttrs attrs)
+{
+ uint64_t tmp;
+
+ tmp = mr->ops->fetch(mr->opaque, addr, size);
+ if (mr->subpage) {
+ trace_memory_region_subpage_fetch(get_cpu_index(), mr, addr, tmp, size);
+ } else if (trace_event_get_state_backends(TRACE_MEMORY_REGION_OPS_FETCH)) {
+ hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
+ trace_memory_region_ops_fetch(get_cpu_index(), mr, abs_addr, tmp, size,
+ memory_region_name(mr));
+ }
+ memory_region_shift_read_access(value, shift, mask, tmp);
+ return MEMTX_OK;
+}
+
+static MemTxResult memory_region_fetch_with_attrs_accessor(MemoryRegion *mr,
+ hwaddr addr,
+ uint64_t *value,
+ unsigned size,
+ signed shift,
+ uint64_t mask,
+ MemTxAttrs attrs)
+{
+ uint64_t tmp = 0;
+ MemTxResult r;
+
+ r = mr->ops->fetch_with_attrs(mr->opaque, addr, &tmp, size, attrs);
+ if (mr->subpage) {
+ trace_memory_region_subpage_fetch(get_cpu_index(), mr, addr, tmp, size);
+ } else if (trace_event_get_state_backends(TRACE_MEMORY_REGION_OPS_FETCH)) {
+ hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
+ trace_memory_region_ops_fetch(get_cpu_index(), mr, abs_addr, tmp, size,
+ memory_region_name(mr));
+ }
+ memory_region_shift_read_access(value, shift, mask, tmp);
+ return r;
+}
+
static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
hwaddr addr,
uint64_t *value,
@@ -1461,6 +1506,65 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
return r;
}
+static MemTxResult memory_region_dispatch_fetch1(MemoryRegion *mr,
+ hwaddr addr,
+ uint64_t *pval,
+ unsigned size,
+ MemTxAttrs attrs)
+{
+ *pval = 0;
+
+ if (mr->ops->fetch) {
+ return access_with_adjusted_size(addr, pval, size,
+ mr->ops->impl.min_access_size,
+ mr->ops->impl.max_access_size,
+ memory_region_fetch_accessor,
+ mr, attrs);
+ } else if (mr->ops->fetch_with_attrs) {
+ return access_with_adjusted_size(addr, pval, size,
+ mr->ops->impl.min_access_size,
+ mr->ops->impl.max_access_size,
+ memory_region_fetch_with_attrs_accessor,
+ mr, attrs);
+ } else if (mr->ops->read) {
+ return access_with_adjusted_size(addr, pval, size,
+ mr->ops->impl.min_access_size,
+ mr->ops->impl.max_access_size,
+ memory_region_read_accessor,
+ mr, attrs);
+ } else {
+ return access_with_adjusted_size(addr, pval, size,
+ mr->ops->impl.min_access_size,
+ mr->ops->impl.max_access_size,
+ memory_region_read_with_attrs_accessor,
+ mr, attrs);
+ }
+}
+
+MemTxResult memory_region_dispatch_fetch(MemoryRegion *mr,
+ hwaddr addr,
+ uint64_t *pval,
+ MemOp op,
+ MemTxAttrs attrs)
+{
+ unsigned size = memop_size(op);
+ MemTxResult r;
+
+ if (mr->alias) {
+ return memory_region_dispatch_fetch(mr->alias,
+ mr->alias_offset + addr,
+ pval, op, attrs);
+ }
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
+ *pval = unassigned_mem_read(mr, addr, size);
+ return MEMTX_DECODE_ERROR;
+ }
+
+ r = memory_region_dispatch_fetch1(mr, addr, pval, size, attrs);
+ adjust_endianness(mr, pval, op);
+ return r;
+}
+
/* Return true if an eventfd was signalled */
static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
hwaddr addr,
diff --git a/system/trace-events b/system/trace-events
index 2ed1d59b1f..a8fc70f28f 100644
--- a/system/trace-events
+++ b/system/trace-events
@@ -11,8 +11,10 @@ cpu_out(unsigned int addr, char size, unsigned int val) "addr 0x%x(%c) value %u"
# memory.c
memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
+memory_region_ops_fetch(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
+memory_region_subpage_fetch(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_sync_dirty(const char *mr, const char *listener, int global) "mr '%s' listener '%s' synced (global=%d)"
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 1/8] memory: Introduce memory region fetch operation Ethan Chen via
@ 2024-07-15 9:56 ` Ethan Chen via
2024-08-08 4:12 ` Alistair Francis
2024-07-15 9:56 ` [PATCH v8 3/8] target/riscv: Add support for IOPMP Ethan Chen via
` (6 subsequent siblings)
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 9:56 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
If the IOMMU granularity is smaller than the TARGET_PAGE size, there may be
multiple entries within the same page. To obtain the correct result, pass
the original address to the IOMMU.
Similar to the RISC-V PMP solution, the TLB_INVALID_MASK will be set when
there are multiple entries in the same page, ensuring that the IOMMU is
checked on every access.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
accel/tcg/cputlb.c | 20 ++++++++++++++++----
system/physmem.c | 4 ++++
2 files changed, 20 insertions(+), 4 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index edb3715017..7df106fea3 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1062,8 +1062,23 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
prot = full->prot;
asidx = cpu_asidx_from_attrs(cpu, full->attrs);
- section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
+ section = address_space_translate_for_iotlb(cpu, asidx, full->phys_addr,
&xlat, &sz, full->attrs, &prot);
+ /* Update page size */
+ full->lg_page_size = ctz64(sz);
+ if (full->lg_page_size > TARGET_PAGE_BITS) {
+ full->lg_page_size = TARGET_PAGE_BITS;
+ } else {
+ sz = TARGET_PAGE_SIZE;
+ }
+
+ is_ram = memory_region_is_ram(section->mr);
+ is_romd = memory_region_is_romd(section->mr);
+ /* If the translated mr is ram/rom, make xlat align the TARGET_PAGE */
+ if (is_ram || is_romd) {
+ xlat &= TARGET_PAGE_MASK;
+ }
+
assert(sz >= TARGET_PAGE_SIZE);
tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
@@ -1076,9 +1091,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
read_flags |= TLB_INVALID_MASK;
}
- is_ram = memory_region_is_ram(section->mr);
- is_romd = memory_region_is_romd(section->mr);
-
if (is_ram || is_romd) {
/* RAM and ROMD both have associated host memory. */
addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
diff --git a/system/physmem.c b/system/physmem.c
index 2154432cb6..346b015447 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -702,6 +702,10 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
| (addr & iotlb.addr_mask));
+ /* Update size */
+ if (iotlb.addr_mask != -1 && *plen > iotlb.addr_mask + 1) {
+ *plen = iotlb.addr_mask + 1;
+ }
/* Update the caller's prot bits to remove permissions the IOMMU
* is giving us a failure response for. If we get down to no
* permissions left at all we can give up now.
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 3/8] target/riscv: Add support for IOPMP
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 1/8] memory: Introduce memory region fetch operation Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size Ethan Chen via
@ 2024-07-15 9:56 ` Ethan Chen via
2024-08-08 4:13 ` Alistair Francis
2024-07-15 9:56 ` [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device Ethan Chen via
` (5 subsequent siblings)
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 9:56 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
target/riscv/cpu_cfg.h | 2 ++
target/riscv/cpu_helper.c | 18 +++++++++++++++---
2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index fb7eebde52..2946fec20c 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -164,6 +164,8 @@ struct RISCVCPUConfig {
bool pmp;
bool debug;
bool misa_w;
+ bool iopmp;
+ uint32_t iopmp_rrid;
bool short_isa_string;
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 6709622dd3..c2d6a874da 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -1418,9 +1418,21 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
}
if (ret == TRANSLATE_SUCCESS) {
- tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
- prot, mmu_idx, tlb_size);
- return true;
+ if (cpu->cfg.iopmp) {
+ /*
+ * Do not align address on early stage because IOPMP needs origin
+ * address for permission check.
+ */
+ tlb_set_page_with_attrs(cs, address, pa,
+ (MemTxAttrs)
+ {
+ .requester_id = cpu->cfg.iopmp_rrid,
+ },
+ prot, mmu_idx, tlb_size);
+ } else {
+ tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+ prot, mmu_idx, tlb_size);
+ }
} else if (probe) {
return false;
} else {
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (2 preceding siblings ...)
2024-07-15 9:56 ` [PATCH v8 3/8] target/riscv: Add support for IOPMP Ethan Chen via
@ 2024-07-15 9:56 ` Ethan Chen via
2024-08-08 3:56 ` Alistair Francis
2024-07-15 10:12 ` [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory Ethan Chen via
` (4 subsequent siblings)
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 9:56 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
Support basic functions of IOPMP specification v0.9.1 rapid-k model.
The specification url:
https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
The IOPMP checks whether memory access from a device or CPU is valid.
This implementation uses an IOMMU to modify the address space accessed
by the device.
For device access with IOMMUAccessFlags specifying read or write
(IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in
iopmp_translate. If the access is valid, the target address space is
downstream_as. If the access is blocked, it will be redirected to
blocked_rwx_as.
For CPU access with IOMMUAccessFlags not specifying read or write
(IOMMU_NONE), the IOPMP translates the access to the corresponding
address space based on the permission. If the access has full permission
(rwx), the target address space is downstream_as. If the access has
limited permission, the target address space is blocked_ followed by
the lacked permissions.
The operation of a blocked region can trigger an IOPMP interrupt, a bus
error, or it can respond with success and fabricated data, depending on
the value of the IOPMP ERR_CFG register.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
hw/misc/Kconfig | 3 +
hw/misc/meson.build | 1 +
hw/misc/riscv_iopmp.c | 1154 +++++++++++++++++++++++++++++++++
hw/misc/trace-events | 3 +
include/hw/misc/riscv_iopmp.h | 168 +++++
5 files changed, 1329 insertions(+)
create mode 100644 hw/misc/riscv_iopmp.c
create mode 100644 include/hw/misc/riscv_iopmp.h
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
index 1e08785b83..427f0c702e 100644
--- a/hw/misc/Kconfig
+++ b/hw/misc/Kconfig
@@ -213,4 +213,7 @@ config IOSB
config XLNX_VERSAL_TRNG
bool
+config RISCV_IOPMP
+ bool
+
source macio/Kconfig
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index 2ca8717be2..d9006e1d81 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c'))
system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c'))
system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c'))
system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
+specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c'))
subdir('macio')
diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
new file mode 100644
index 0000000000..db43e3c73f
--- /dev/null
+++ b/hw/misc/riscv_iopmp.c
@@ -0,0 +1,1154 @@
+/*
+ * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
+ *
+ * Copyright (c) 2023-2024 Andes Tech. Corp.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qapi/error.h"
+#include "trace.h"
+#include "exec/exec-all.h"
+#include "exec/address-spaces.h"
+#include "hw/qdev-properties.h"
+#include "hw/sysbus.h"
+#include "hw/misc/riscv_iopmp.h"
+#include "memory.h"
+#include "hw/irq.h"
+#include "hw/registerfields.h"
+#include "trace.h"
+
+#define TYPE_IOPMP_IOMMU_MEMORY_REGION "iopmp-iommu-memory-region"
+
+REG32(VERSION, 0x00)
+ FIELD(VERSION, VENDOR, 0, 24)
+ FIELD(VERSION, SPECVER , 24, 8)
+REG32(IMP, 0x04)
+ FIELD(IMP, IMPID, 0, 32)
+REG32(HWCFG0, 0x08)
+ FIELD(HWCFG0, MODEL, 0, 4)
+ FIELD(HWCFG0, TOR_EN, 4, 1)
+ FIELD(HWCFG0, SPS_EN, 5, 1)
+ FIELD(HWCFG0, USER_CFG_EN, 6, 1)
+ FIELD(HWCFG0, PRIENT_PROG, 7, 1)
+ FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1)
+ FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1)
+ FIELD(HWCFG0, CHK_X, 10, 1)
+ FIELD(HWCFG0, NO_X, 11, 1)
+ FIELD(HWCFG0, NO_W, 12, 1)
+ FIELD(HWCFG0, STALL_EN, 13, 1)
+ FIELD(HWCFG0, PEIS, 14, 1)
+ FIELD(HWCFG0, PEES, 15, 1)
+ FIELD(HWCFG0, MFR_EN, 16, 1)
+ FIELD(HWCFG0, MD_NUM, 24, 7)
+ FIELD(HWCFG0, ENABLE, 31, 1)
+REG32(HWCFG1, 0x0C)
+ FIELD(HWCFG1, RRID_NUM, 0, 16)
+ FIELD(HWCFG1, ENTRY_NUM, 16, 16)
+REG32(HWCFG2, 0x10)
+ FIELD(HWCFG2, PRIO_ENTRY, 0, 16)
+ FIELD(HWCFG2, RRID_TRANSL, 16, 16)
+REG32(ENTRYOFFSET, 0x14)
+ FIELD(ENTRYOFFSET, OFFSET, 0, 32)
+REG32(MDSTALL, 0x30)
+ FIELD(MDSTALL, EXEMPT, 0, 1)
+ FIELD(MDSTALL, MD, 1, 31)
+REG32(MDSTALLH, 0x34)
+ FIELD(MDSTALLH, MD, 0, 32)
+REG32(RRIDSCP, 0x38)
+ FIELD(RRIDSCP, RRID, 0, 16)
+ FIELD(RRIDSCP, OP, 30, 2)
+REG32(MDLCK, 0x40)
+ FIELD(MDLCK, L, 0, 1)
+ FIELD(MDLCK, MD, 1, 31)
+REG32(MDLCKH, 0x44)
+ FIELD(MDLCKH, MDH, 0, 32)
+REG32(MDCFGLCK, 0x48)
+ FIELD(MDCFGLCK, L, 0, 1)
+ FIELD(MDCFGLCK, F, 1, 7)
+REG32(ENTRYLCK, 0x4C)
+ FIELD(ENTRYLCK, L, 0, 1)
+ FIELD(ENTRYLCK, F, 1, 16)
+REG32(ERR_CFG, 0x60)
+ FIELD(ERR_CFG, L, 0, 1)
+ FIELD(ERR_CFG, IE, 1, 1)
+ FIELD(ERR_CFG, IRE, 2, 1)
+ FIELD(ERR_CFG, IWE, 3, 1)
+ FIELD(ERR_CFG, IXE, 4, 1)
+ FIELD(ERR_CFG, RRE, 5, 1)
+ FIELD(ERR_CFG, RWE, 6, 1)
+ FIELD(ERR_CFG, RXE, 7, 1)
+REG32(ERR_REQINFO, 0x64)
+ FIELD(ERR_REQINFO, V, 0, 1)
+ FIELD(ERR_REQINFO, TTYPE, 1, 2)
+ FIELD(ERR_REQINFO, ETYPE, 4, 3)
+ FIELD(ERR_REQINFO, SVC, 7, 1)
+REG32(ERR_REQADDR, 0x68)
+ FIELD(ERR_REQADDR, ADDR, 0, 32)
+REG32(ERR_REQADDRH, 0x6C)
+ FIELD(ERR_REQADDRH, ADDRH, 0, 32)
+REG32(ERR_REQID, 0x70)
+ FIELD(ERR_REQID, RRID, 0, 16)
+ FIELD(ERR_REQID, EID, 16, 16)
+REG32(ERR_MFR, 0x74)
+ FIELD(ERR_MFR, SVW, 0, 16)
+ FIELD(ERR_MFR, SVI, 16, 12)
+ FIELD(ERR_MFR, SVS, 31, 1)
+REG32(MDCFG0, 0x800)
+ FIELD(MDCFG0, T, 0, 16)
+REG32(SRCMD_EN0, 0x1000)
+ FIELD(SRCMD_EN0, L, 0, 1)
+ FIELD(SRCMD_EN0, MD, 1, 31)
+REG32(SRCMD_ENH0, 0x1004)
+ FIELD(SRCMD_ENH0, MDH, 0, 32)
+REG32(SRCMD_R0, 0x1008)
+ FIELD(SRCMD_R0, MD, 1, 31)
+REG32(SRCMD_RH0, 0x100C)
+ FIELD(SRCMD_RH0, MDH, 0, 32)
+REG32(SRCMD_W0, 0x1010)
+ FIELD(SRCMD_W0, MD, 1, 31)
+REG32(SRCMD_WH0, 0x1014)
+ FIELD(SRCMD_WH0, MDH, 0, 32)
+
+FIELD(ENTRY_ADDR, ADDR, 0, 32)
+FIELD(ENTRY_ADDRH, ADDRH, 0, 32)
+
+FIELD(ENTRY_CFG, R, 0, 1)
+FIELD(ENTRY_CFG, W, 1, 1)
+FIELD(ENTRY_CFG, X, 2, 1)
+FIELD(ENTRY_CFG, A, 3, 2)
+FIELD(ENTRY_CFG, SIRE, 5, 1)
+FIELD(ENTRY_CFG, SIWE, 6, 1)
+FIELD(ENTRY_CFG, SIXE, 7, 1)
+FIELD(ENTRY_CFG, SERE, 8, 1)
+FIELD(ENTRY_CFG, SEWE, 9, 1)
+FIELD(ENTRY_CFG, SEXE, 10, 1)
+
+FIELD(ENTRY_USER_CFG, IM, 0, 32)
+
+/* Offsets to SRCMD_EN(i) */
+#define SRCMD_EN_OFFSET 0x0
+#define SRCMD_ENH_OFFSET 0x4
+#define SRCMD_R_OFFSET 0x8
+#define SRCMD_RH_OFFSET 0xC
+#define SRCMD_W_OFFSET 0x10
+#define SRCMD_WH_OFFSET 0x14
+
+/* Offsets to ENTRY_ADDR(i) */
+#define ENTRY_ADDR_OFFSET 0x0
+#define ENTRY_ADDRH_OFFSET 0x4
+#define ENTRY_CFG_OFFSET 0x8
+#define ENTRY_USER_CFG_OFFSET 0xC
+
+/* Memmap for parallel IOPMPs */
+typedef struct iopmp_protection_memmap {
+ MemMapEntry entry;
+ IopmpState *iopmp_s;
+ QLIST_ENTRY(iopmp_protection_memmap) list;
+} iopmp_protection_memmap;
+QLIST_HEAD(, iopmp_protection_memmap)
+ iopmp_protection_memmaps = QLIST_HEAD_INITIALIZER(iopmp_protection_memmaps);
+
+static void iopmp_iommu_notify(IopmpState *s)
+{
+ IOMMUTLBEvent event = {
+ .entry = {
+ .iova = 0,
+ .translated_addr = 0,
+ .addr_mask = -1ULL,
+ .perm = IOMMU_NONE,
+ },
+ .type = IOMMU_NOTIFIER_UNMAP,
+ };
+
+ for (int i = 0; i < s->rrid_num; i++) {
+ memory_region_notify_iommu(&s->iommu, i, event);
+ }
+}
+
+static void iopmp_decode_napot(uint64_t a, uint64_t *sa,
+ uint64_t *ea)
+{
+ /*
+ * aaaa...aaa0 8-byte NAPOT range
+ * aaaa...aa01 16-byte NAPOT range
+ * aaaa...a011 32-byte NAPOT range
+ * ...
+ * aa01...1111 2^XLEN-byte NAPOT range
+ * a011...1111 2^(XLEN+1)-byte NAPOT range
+ * 0111...1111 2^(XLEN+2)-byte NAPOT range
+ * 1111...1111 Reserved
+ */
+
+ a = (a << 2) | 0x3;
+ *sa = a & (a + 1);
+ *ea = a | (a + 1);
+}
+
+static void iopmp_update_rule(IopmpState *s, uint32_t entry_index)
+{
+ uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg;
+ uint64_t this_addr = s->regs.entry[entry_index].addr_reg |
+ ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32);
+ uint64_t prev_addr = 0u;
+ uint64_t sa = 0u;
+ uint64_t ea = 0u;
+
+ if (entry_index >= 1u) {
+ prev_addr = s->regs.entry[entry_index - 1].addr_reg |
+ ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32);
+ }
+
+ switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) {
+ case IOPMP_AMATCH_OFF:
+ sa = 0u;
+ ea = -1;
+ break;
+
+ case IOPMP_AMATCH_TOR:
+ sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */
+ ea = ((this_addr) << 2) - 1u;
+ if (sa > ea) {
+ sa = ea = 0u;
+ }
+ break;
+
+ case IOPMP_AMATCH_NA4:
+ sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
+ ea = (sa + 4u) - 1u;
+ break;
+
+ case IOPMP_AMATCH_NAPOT:
+ iopmp_decode_napot(this_addr, &sa, &ea);
+ break;
+
+ default:
+ sa = 0u;
+ ea = 0u;
+ break;
+ }
+
+ s->entry_addr[entry_index].sa = sa;
+ s->entry_addr[entry_index].ea = ea;
+ iopmp_iommu_notify(s);
+}
+
+static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size)
+{
+ IopmpState *s = IOPMP(opaque);
+ uint32_t rz = 0;
+ uint32_t offset, idx;
+
+ switch (addr) {
+ case A_VERSION:
+ rz = VENDER_VIRT << R_VERSION_VENDOR_SHIFT |
+ SPECVER_0_9_1 << R_VERSION_SPECVER_SHIFT;
+ break;
+ case A_IMP:
+ rz = IMPID_0_9_1;
+ break;
+ case A_HWCFG0:
+ rz = s->model << R_HWCFG0_MODEL_SHIFT |
+ 1 << R_HWCFG0_TOR_EN_SHIFT |
+ 0 << R_HWCFG0_SPS_EN_SHIFT |
+ 0 << R_HWCFG0_USER_CFG_EN_SHIFT |
+ s->prient_prog << R_HWCFG0_PRIENT_PROG_SHIFT |
+ 0 << R_HWCFG0_RRID_TRANSL_EN_SHIFT |
+ 0 << R_HWCFG0_RRID_TRANSL_PROG_SHIFT |
+ 1 << R_HWCFG0_CHK_X_SHIFT |
+ 0 << R_HWCFG0_NO_X_SHIFT |
+ 0 << R_HWCFG0_NO_W_SHIFT |
+ 0 << R_HWCFG0_STALL_EN_SHIFT |
+ 0 << R_HWCFG0_PEIS_SHIFT |
+ 0 << R_HWCFG0_PEES_SHIFT |
+ 0 << R_HWCFG0_MFR_EN_SHIFT |
+ s->md_num << R_HWCFG0_MD_NUM_SHIFT |
+ s->enable << R_HWCFG0_ENABLE_SHIFT ;
+ break;
+ case A_HWCFG1:
+ rz = s->rrid_num << R_HWCFG1_RRID_NUM_SHIFT |
+ s->entry_num << R_HWCFG1_ENTRY_NUM_SHIFT;
+ break;
+ case A_HWCFG2:
+ rz = s->prio_entry << R_HWCFG2_PRIO_ENTRY_SHIFT;
+ break;
+ case A_ENTRYOFFSET:
+ rz = s->entry_offset;
+ break;
+ case A_ERR_CFG:
+ rz = s->regs.err_cfg;
+ break;
+ case A_MDLCK:
+ rz = s->regs.mdlck;
+ break;
+ case A_MDLCKH:
+ rz = s->regs.mdlckh;
+ break;
+ case A_MDCFGLCK:
+ rz = s->regs.mdcfglck;
+ break;
+ case A_ENTRYLCK:
+ rz = s->regs.entrylck;
+ break;
+ case A_ERR_REQADDR:
+ rz = s->regs.err_reqaddr & UINT32_MAX;
+ break;
+ case A_ERR_REQADDRH:
+ rz = s->regs.err_reqaddr >> 32;
+ break;
+ case A_ERR_REQID:
+ rz = s->regs.err_reqid;
+ break;
+ case A_ERR_REQINFO:
+ rz = s->regs.err_reqinfo;
+ break;
+
+ default:
+ if (addr >= A_MDCFG0 &&
+ addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
+ offset = addr - A_MDCFG0;
+ idx = offset >> 2;
+ if (idx == 0 && offset == 0) {
+ rz = s->regs.mdcfg[idx];
+ } else {
+ /* Only MDCFG0 is implemented in rapid-k model */
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ }
+ } else if (addr >= A_SRCMD_EN0 &&
+ addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
+ offset = addr - A_SRCMD_EN0;
+ idx = offset >> 5;
+ offset &= 0x1f;
+
+ switch (offset) {
+ case SRCMD_EN_OFFSET:
+ rz = s->regs.srcmd_en[idx];
+ break;
+ case SRCMD_ENH_OFFSET:
+ rz = s->regs.srcmd_enh[idx];
+ break;
+ default:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ break;
+ }
+ } else if (addr >= s->entry_offset &&
+ addr < s->entry_offset + ENTRY_USER_CFG_OFFSET +
+ 16 * (s->entry_num - 1)) {
+ offset = addr - s->entry_offset;
+ idx = offset >> 4;
+ offset &= 0xf;
+
+ switch (offset) {
+ case ENTRY_ADDR_OFFSET:
+ rz = s->regs.entry[idx].addr_reg;
+ break;
+ case ENTRY_ADDRH_OFFSET:
+ rz = s->regs.entry[idx].addrh_reg;
+ break;
+ case ENTRY_CFG_OFFSET:
+ rz = s->regs.entry[idx].cfg_reg;
+ break;
+ case ENTRY_USER_CFG_OFFSET:
+ /* Does not support user customized permission */
+ rz = 0;
+ break;
+ default:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ break;
+ }
+ } else {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ }
+ break;
+ }
+ trace_iopmp_read(addr, rz);
+ return rz;
+}
+
+static void
+iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size)
+{
+ IopmpState *s = IOPMP(opaque);
+ uint32_t offset, idx;
+ uint32_t value32 = value;
+
+ trace_iopmp_write(addr, value32);
+
+ switch (addr) {
+ case A_VERSION: /* RO */
+ break;
+ case A_IMP: /* RO */
+ break;
+ case A_HWCFG0:
+ if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) {
+ /* W1C */
+ s->prient_prog = 0;
+ }
+ if (FIELD_EX32(value32, HWCFG0, ENABLE)) {
+ /* W1S */
+ s->enable = 1;
+ iopmp_iommu_notify(s);
+ }
+ break;
+ case A_HWCFG1: /* RO */
+ break;
+ case A_HWCFG2:
+ if (s->prient_prog) {
+ s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY);
+ }
+ break;
+ case A_ERR_CFG:
+ if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) {
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L,
+ FIELD_EX32(value32, ERR_CFG, L));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE,
+ FIELD_EX32(value32, ERR_CFG, IE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IRE,
+ FIELD_EX32(value32, ERR_CFG, IRE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RRE,
+ FIELD_EX32(value32, ERR_CFG, RRE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IWE,
+ FIELD_EX32(value32, ERR_CFG, IWE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RWE,
+ FIELD_EX32(value32, ERR_CFG, RWE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IXE,
+ FIELD_EX32(value32, ERR_CFG, IXE));
+ s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RXE,
+ FIELD_EX32(value32, ERR_CFG, RXE));
+ }
+ break;
+ case A_MDLCK:
+ if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
+ s->regs.mdlck = value32;
+ }
+ break;
+ case A_MDLCKH:
+ if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
+ s->regs.mdlckh = value32;
+ }
+ break;
+ case A_MDCFGLCK:
+ if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) {
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F,
+ FIELD_EX32(value32, MDCFGLCK, F));
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L,
+ FIELD_EX32(value32, MDCFGLCK, L));
+ }
+ break;
+ case A_ENTRYLCK:
+ if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) {
+ s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F,
+ FIELD_EX32(value32, ENTRYLCK, F));
+ s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L,
+ FIELD_EX32(value32, ENTRYLCK, L));
+ }
+ case A_ERR_REQADDR: /* RO */
+ break;
+ case A_ERR_REQADDRH: /* RO */
+ break;
+ case A_ERR_REQID: /* RO */
+ break;
+ case A_ERR_REQINFO:
+ if (FIELD_EX32(value32, ERR_REQINFO, V)) {
+ s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo,
+ ERR_REQINFO, V, 0);
+ qemu_set_irq(s->irq, 0);
+ }
+ break;
+
+ default:
+ if (addr >= A_MDCFG0 &&
+ addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
+ offset = addr - A_MDCFG0;
+ idx = offset >> 2;
+ /* RO in rapid-k model */
+ if (idx > 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ }
+ } else if (addr >= A_SRCMD_EN0 &&
+ addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
+ offset = addr - A_SRCMD_EN0;
+ idx = offset >> 5;
+ offset &= 0x1f;
+
+ if (offset % 4) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L)
+ == 0) {
+ switch (offset) {
+ case SRCMD_EN_OFFSET:
+ s->regs.srcmd_en[idx] =
+ FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L,
+ FIELD_EX32(value32, SRCMD_EN0, L));
+
+ /* MD field is protected by mdlck */
+ value32 = (value32 & ~s->regs.mdlck) |
+ (s->regs.srcmd_en[idx] & s->regs.mdlck);
+ s->regs.srcmd_en[idx] =
+ FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD,
+ FIELD_EX32(value32, SRCMD_EN0, MD));
+ break;
+ case SRCMD_ENH_OFFSET:
+ value32 = (value32 & ~s->regs.mdlckh) |
+ (s->regs.srcmd_enh[idx] & s->regs.mdlckh);
+ s->regs.srcmd_enh[idx] =
+ FIELD_DP32(s->regs.srcmd_enh[idx], SRCMD_ENH0, MDH,
+ value32);
+ break;
+ default:
+ break;
+ }
+ }
+ } else if (addr >= s->entry_offset &&
+ addr < s->entry_offset + ENTRY_USER_CFG_OFFSET
+ + 16 * (s->entry_num - 1)) {
+ offset = addr - s->entry_offset;
+ idx = offset >> 4;
+ offset &= 0xf;
+
+ /* index < ENTRYLCK_F is protected */
+ if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) {
+ switch (offset) {
+ case ENTRY_ADDR_OFFSET:
+ s->regs.entry[idx].addr_reg = value32;
+ break;
+ case ENTRY_ADDRH_OFFSET:
+ s->regs.entry[idx].addrh_reg = value32;
+ break;
+ case ENTRY_CFG_OFFSET:
+ s->regs.entry[idx].cfg_reg = value32;
+ break;
+ case ENTRY_USER_CFG_OFFSET:
+ /* Does not support user customized permission */
+ break;
+ default:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
+ __func__, (int)addr);
+ break;
+ }
+ iopmp_update_rule(s, idx);
+ if (idx + 1 < s->entry_num &&
+ FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) ==
+ IOPMP_AMATCH_TOR) {
+ iopmp_update_rule(s, idx + 1);
+ }
+ }
+ } else {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__,
+ (int)addr);
+ }
+ }
+}
+
+/* Match entry in memory domain */
+static int match_entry_md(IopmpState *s, int md_idx, hwaddr start_addr,
+ hwaddr end_addr, int *entry_idx,
+ int *prior_entry_in_tlb)
+{
+ int entry_idx_s, entry_idx_e;
+ int result = ENTRY_NO_HIT;
+ int i = 0;
+ hwaddr tlb_sa = start_addr & ~(TARGET_PAGE_SIZE - 1);
+ hwaddr tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
+
+ entry_idx_s = md_idx * s->regs.mdcfg[0];
+ entry_idx_e = (md_idx + 1) * s->regs.mdcfg[0];
+
+ if (entry_idx_s >= s->entry_num) {
+ return result;
+ }
+ if (entry_idx_e > s->entry_num) {
+ entry_idx_e = s->entry_num;
+ }
+ i = entry_idx_s;
+ for (i = entry_idx_s; i < entry_idx_e; i++) {
+ if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) ==
+ IOPMP_AMATCH_OFF) {
+ continue;
+ }
+ if (start_addr >= s->entry_addr[i].sa &&
+ start_addr <= s->entry_addr[i].ea) {
+ /* Check end address */
+ if (end_addr >= s->entry_addr[i].sa &&
+ end_addr <= s->entry_addr[i].ea) {
+ *entry_idx = i;
+ return ENTRY_HIT;
+ } else if (i >= s->prio_entry) {
+ /* Continue for non-prio_entry */
+ continue;
+ } else {
+ *entry_idx = i;
+ return ENTRY_PAR_HIT;
+ }
+ } else if (end_addr >= s->entry_addr[i].sa &&
+ end_addr <= s->entry_addr[i].ea) {
+ /* Only end address matches the entry */
+ if (i >= s->prio_entry) {
+ continue;
+ } else {
+ *entry_idx = i;
+ return ENTRY_PAR_HIT;
+ }
+ } else if (start_addr < s->entry_addr[i].sa &&
+ end_addr > s->entry_addr[i].ea) {
+ if (i >= s->prio_entry) {
+ continue;
+ } else {
+ *entry_idx = i;
+ return ENTRY_PAR_HIT;
+ }
+ }
+ if (prior_entry_in_tlb != NULL) {
+ if ((s->entry_addr[i].sa >= tlb_sa &&
+ s->entry_addr[i].sa <= tlb_ea) ||
+ (s->entry_addr[i].ea >= tlb_sa &&
+ s->entry_addr[i].ea <= tlb_ea)) {
+ /*
+ * TLB should not use the cached result when the tlb contains
+ * higher priority entry
+ */
+ *prior_entry_in_tlb = 1;
+ }
+ }
+ }
+ return result;
+}
+
+static int match_entry(IopmpState *s, int rrid, hwaddr start_addr,
+ hwaddr end_addr, int *match_md_idx,
+ int *match_entry_idx, int *prior_entry_in_tlb)
+{
+ int cur_result = ENTRY_NO_HIT;
+ int result = ENTRY_NO_HIT;
+ /* Remove lock bit */
+ uint64_t srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] |
+ ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1;
+
+ for (int md_idx = 0; md_idx < s->md_num; md_idx++) {
+ if (srcmd_en & (1ULL << md_idx)) {
+ cur_result = match_entry_md(s, md_idx, start_addr, end_addr,
+ match_entry_idx, prior_entry_in_tlb);
+ if (cur_result == ENTRY_HIT || cur_result == ENTRY_PAR_HIT) {
+ *match_md_idx = md_idx;
+ return cur_result;
+ }
+ }
+ }
+ return result;
+}
+
+static void iopmp_error_reaction(IopmpState *s, uint32_t id, hwaddr start,
+ uint32_t info)
+{
+ if (!FIELD_EX32(s->regs.err_reqinfo, ERR_REQINFO, V)) {
+ s->regs.err_reqinfo = info;
+ s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo, ERR_REQINFO, V,
+ 1);
+ s->regs.err_reqid = id;
+ /* addr[LEN+2:2] */
+ s->regs.err_reqaddr = start >> 2;
+
+ if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_READ &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IRE)) {
+ qemu_set_irq(s->irq, 1);
+ }
+ if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_WRITE &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IWE)) {
+ qemu_set_irq(s->irq, 1);
+ }
+ if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_FETCH &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
+ FIELD_EX32(s->regs.err_cfg, ERR_CFG, IXE)) {
+ qemu_set_irq(s->irq, 1);
+ }
+ }
+}
+
+static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
+ IOMMUAccessFlags flags, int iommu_idx)
+{
+ int rrid = iommu_idx;
+ IopmpState *s = IOPMP(container_of(iommu, IopmpState, iommu));
+ hwaddr start_addr, end_addr;
+ int entry_idx = -1;
+ int md_idx = -1;
+ int result;
+ uint32_t error_info = 0;
+ uint32_t error_id = 0;
+ int prior_entry_in_tlb = 0;
+ iopmp_permission iopmp_perm;
+ IOMMUTLBEntry entry = {
+ .target_as = &s->downstream_as,
+ .iova = addr,
+ .translated_addr = addr,
+ .addr_mask = 0,
+ .perm = IOMMU_NONE,
+ };
+
+ if (!s->enable) {
+ /* Bypass IOPMP */
+ entry.addr_mask = -1ULL,
+ entry.perm = IOMMU_RW;
+ return entry;
+ }
+
+ /* unknown RRID */
+ if (rrid >= s->rrid_num) {
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_RRID);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
+ iopmp_error_reaction(s, error_id, addr, error_info);
+ entry.target_as = &s->blocked_rwx_as;
+ entry.perm = IOMMU_RW;
+ return entry;
+ }
+
+ if (s->transaction_state[rrid].supported == true) {
+ start_addr = s->transaction_state[rrid].start_addr;
+ end_addr = s->transaction_state[rrid].end_addr;
+ } else {
+ /* No transaction information, use the same address */
+ start_addr = addr;
+ end_addr = addr;
+ }
+
+ result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
+ &prior_entry_in_tlb);
+ if (result == ENTRY_HIT) {
+ entry.addr_mask = s->entry_addr[entry_idx].ea -
+ s->entry_addr[entry_idx].sa;
+ if (prior_entry_in_tlb) {
+ /* Make TLB repeat iommu translation on every access */
+ entry.addr_mask = 0;
+ }
+ iopmp_perm = s->regs.entry[entry_idx].cfg_reg & IOPMP_RWX;
+ if (flags) {
+ if ((iopmp_perm & flags) == 0) {
+ /* Permission denied */
+ error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_READ + flags - 1);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ entry.target_as = &s->blocked_rwx_as;
+ entry.perm = IOMMU_RW;
+ } else {
+ entry.target_as = &s->downstream_as;
+ entry.perm = iopmp_perm;
+ }
+ } else {
+ /* CPU access with IOMMU_NONE flag */
+ if (iopmp_perm & IOPMP_XO) {
+ if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) {
+ entry.target_as = &s->downstream_as;
+ } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
+ entry.target_as = &s->blocked_w_as;
+ } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
+ entry.target_as = &s->blocked_r_as;
+ } else {
+ entry.target_as = &s->blocked_rw_as;
+ }
+ } else {
+ if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) {
+ entry.target_as = &s->blocked_x_as;
+ } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
+ entry.target_as = &s->blocked_wx_as;
+ } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
+ entry.target_as = &s->blocked_rx_as;
+ } else {
+ entry.target_as = &s->blocked_rwx_as;
+ }
+ }
+ entry.perm = IOMMU_RW;
+ }
+ } else {
+ if (flags) {
+ if (result == ENTRY_PAR_HIT) {
+ error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_PARHIT);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ } else {
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_NOHIT);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ }
+ }
+ /* CPU access with IOMMU_NONE flag no_hit or par_hit */
+ entry.target_as = &s->blocked_rwx_as;
+ entry.perm = IOMMU_RW;
+ }
+ return entry;
+}
+
+static const MemoryRegionOps iopmp_ops = {
+ .read = iopmp_read,
+ .write = iopmp_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 4, .max_access_size = 4}
+};
+
+static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr,
+ uint64_t value, unsigned size,
+ MemTxAttrs attrs)
+{
+ IopmpState *s = IOPMP(opaque);
+ return address_space_write(&s->downstream_as, addr, attrs, &value, size);
+}
+
+static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr,
+ uint64_t *pdata, unsigned size,
+ MemTxAttrs attrs)
+{
+ IopmpState *s = IOPMP(opaque);
+ return address_space_read(&s->downstream_as, addr, attrs, pdata, size);
+}
+
+static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr,
+ uint64_t *data, unsigned size,
+ MemTxAttrs attrs,
+ iopmp_access_type access_type) {
+ IopmpState *s = IOPMP(opaque);
+ int md_idx, entry_idx;
+ uint32_t error_info = 0;
+ uint32_t error_id = 0;
+ int rrid = attrs.requester_id;
+ int result;
+ hwaddr start_addr, end_addr;
+ start_addr = addr;
+ end_addr = addr;
+ result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
+ NULL);
+
+ if (result == ENTRY_HIT) {
+ error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ access_type);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ } else if (result == ENTRY_PAR_HIT) {
+ error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_PARHIT);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE,
+ access_type);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ } else {
+ error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
+ ERR_REQINFO_ETYPE_NOHIT);
+ error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
+ iopmp_error_reaction(s, error_id, start_addr, error_info);
+ }
+
+ if (access_type == IOPMP_ACCESS_READ) {
+
+ switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RRE)) {
+ case RRE_ERROR:
+ return MEMTX_ERROR;
+ break;
+ case RRE_SUCCESS_VALUE:
+ *data = s->fabricated_v;
+ return MEMTX_OK;
+ break;
+ default:
+ break;
+ }
+ return MEMTX_OK;
+ } else if (access_type == IOPMP_ACCESS_WRITE) {
+
+ switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RWE)) {
+ case RWE_ERROR:
+ return MEMTX_ERROR;
+ break;
+ case RWE_SUCCESS:
+ return MEMTX_OK;
+ break;
+ default:
+ break;
+ }
+ return MEMTX_OK;
+ } else {
+
+ switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RXE)) {
+ case RXE_ERROR:
+ return MEMTX_ERROR;
+ break;
+ case RXE_SUCCESS_VALUE:
+ *data = s->fabricated_v;
+ return MEMTX_OK;
+ break;
+ default:
+ break;
+ }
+ return MEMTX_OK;
+ }
+ return MEMTX_OK;
+}
+
+static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value,
+ unsigned size, MemTxAttrs attrs)
+{
+ return iopmp_handle_block(opaque, addr, &value, size, attrs,
+ IOPMP_ACCESS_WRITE);
+}
+
+static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata,
+ unsigned size, MemTxAttrs attrs)
+{
+ return iopmp_handle_block(opaque, addr, pdata, size, attrs,
+ IOPMP_ACCESS_READ);
+}
+
+static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata,
+ unsigned size, MemTxAttrs attrs)
+{
+ return iopmp_handle_block(opaque, addr, pdata, size, attrs,
+ IOPMP_ACCESS_FETCH);
+}
+
+static const MemoryRegionOps iopmp_block_rw_ops = {
+ .fetch_with_attrs = iopmp_permssion_read,
+ .read_with_attrs = iopmp_block_read,
+ .write_with_attrs = iopmp_block_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_w_ops = {
+ .fetch_with_attrs = iopmp_permssion_read,
+ .read_with_attrs = iopmp_permssion_read,
+ .write_with_attrs = iopmp_block_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_r_ops = {
+ .fetch_with_attrs = iopmp_permssion_read,
+ .read_with_attrs = iopmp_block_read,
+ .write_with_attrs = iopmp_permssion_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_rwx_ops = {
+ .fetch_with_attrs = iopmp_block_fetch,
+ .read_with_attrs = iopmp_block_read,
+ .write_with_attrs = iopmp_block_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_wx_ops = {
+ .fetch_with_attrs = iopmp_block_fetch,
+ .read_with_attrs = iopmp_permssion_read,
+ .write_with_attrs = iopmp_block_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_rx_ops = {
+ .fetch_with_attrs = iopmp_block_fetch,
+ .read_with_attrs = iopmp_block_read,
+ .write_with_attrs = iopmp_permssion_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static const MemoryRegionOps iopmp_block_x_ops = {
+ .fetch_with_attrs = iopmp_block_fetch,
+ .read_with_attrs = iopmp_permssion_read,
+ .write_with_attrs = iopmp_permssion_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {.min_access_size = 1, .max_access_size = 8},
+};
+
+static void iopmp_realize(DeviceState *dev, Error **errp)
+{
+ Object *obj = OBJECT(dev);
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+ IopmpState *s = IOPMP(dev);
+ uint64_t size;
+
+ size = -1ULL;
+ s->model = IOPMP_MODEL_RAPIDK;
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
+
+ s->prient_prog = s->default_prient_prog;
+ s->rrid_num = MIN(s->rrid_num, IOPMP_MAX_RRID_NUM);
+ s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM);
+ s->entry_num = s->md_num * s->k;
+ s->prio_entry = MIN(s->prio_entry, s->entry_num);
+
+ s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t));
+ s->regs.mdcfg[0] = s->k;
+
+ s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t));
+ s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t));
+ s->regs.entry = g_malloc0(s->entry_num * sizeof(iopmp_entry_t));
+ s->entry_addr = g_malloc0(s->entry_num * sizeof(iopmp_addr_t));
+ s->transaction_state = g_malloc0(s->rrid_num *
+ sizeof(iopmp_transaction_state));
+ qemu_mutex_init(&s->iopmp_transaction_mutex);
+
+ memory_region_init_iommu(&s->iommu, sizeof(s->iommu),
+ TYPE_IOPMP_IOMMU_MEMORY_REGION,
+ obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX);
+ memory_region_init_io(&s->mmio, obj, &iopmp_ops,
+ s, "iopmp-regs", 0x100000);
+ sysbus_init_mmio(sbd, &s->mmio);
+
+ memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops,
+ s, "iopmp-blocked-rw", size);
+ memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops,
+ s, "iopmp-blocked-w", size);
+ memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops,
+ s, "iopmp-blocked-r", size);
+
+ memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops,
+ s, "iopmp-blocked-rwx", size);
+ memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops,
+ s, "iopmp-blocked-wx", size);
+ memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops,
+ s, "iopmp-blocked-rx", size);
+ memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops,
+ s, "iopmp-blocked-x", size);
+ address_space_init(&s->blocked_rw_as, &s->blocked_rw,
+ "iopmp-blocked-rw-as");
+ address_space_init(&s->blocked_w_as, &s->blocked_w,
+ "iopmp-blocked-w-as");
+ address_space_init(&s->blocked_r_as, &s->blocked_r,
+ "iopmp-blocked-r-as");
+
+ address_space_init(&s->blocked_rwx_as, &s->blocked_rwx,
+ "iopmp-blocked-rwx-as");
+ address_space_init(&s->blocked_wx_as, &s->blocked_wx,
+ "iopmp-blocked-wx-as");
+ address_space_init(&s->blocked_rx_as, &s->blocked_rx,
+ "iopmp-blocked-rx-as");
+ address_space_init(&s->blocked_x_as, &s->blocked_x,
+ "iopmp-blocked-x-as");
+}
+
+static void iopmp_reset(DeviceState *dev)
+{
+ IopmpState *s = IOPMP(dev);
+
+ qemu_set_irq(s->irq, 0);
+ memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t));
+ memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t));
+ memset(s->entry_addr, 0, s->entry_num * sizeof(iopmp_addr_t));
+
+ s->regs.mdlck = 0;
+ s->regs.mdlckh = 0;
+ s->regs.entrylck = 0;
+ s->regs.mdstall = 0;
+ s->regs.mdstallh = 0;
+ s->regs.rridscp = 0;
+ s->regs.err_cfg = 0;
+ s->regs.err_reqaddr = 0;
+ s->regs.err_reqid = 0;
+ s->regs.err_reqinfo = 0;
+
+ s->prient_prog = s->default_prient_prog;
+ s->enable = 0;
+
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
+ s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
+ s->regs.mdcfg[0] = s->k;
+}
+
+static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
+{
+ return attrs.requester_id;
+}
+
+static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data)
+{
+ IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
+
+ imrc->translate = iopmp_translate;
+ imrc->attrs_to_index = iopmp_attrs_to_index;
+}
+
+static Property iopmp_property[] = {
+ DEFINE_PROP_BOOL("prient_prog", IopmpState, default_prient_prog, true),
+ DEFINE_PROP_UINT32("k", IopmpState, k, 6),
+ DEFINE_PROP_UINT32("prio_entry", IopmpState, prio_entry, 48),
+ DEFINE_PROP_UINT32("rrid_num", IopmpState, rrid_num, 16),
+ DEFINE_PROP_UINT32("md_num", IopmpState, md_num, 8),
+ DEFINE_PROP_UINT32("entry_offset", IopmpState, entry_offset, 0x4000),
+ DEFINE_PROP_UINT32("fabricated_v", IopmpState, fabricated_v, 0x0),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void iopmp_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ device_class_set_props(dc, iopmp_property);
+ dc->realize = iopmp_realize;
+ dc->reset = iopmp_reset;
+}
+
+static void iopmp_init(Object *obj)
+{
+ IopmpState *s = IOPMP(obj);
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+
+ sysbus_init_irq(sbd, &s->irq);
+}
+
+static const TypeInfo iopmp_info = {
+ .name = TYPE_IOPMP,
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_size = sizeof(IopmpState),
+ .instance_init = iopmp_init,
+ .class_init = iopmp_class_init,
+};
+
+static const TypeInfo
+iopmp_iommu_memory_region_info = {
+ .name = TYPE_IOPMP_IOMMU_MEMORY_REGION,
+ .parent = TYPE_IOMMU_MEMORY_REGION,
+ .class_init = iopmp_iommu_memory_region_class_init,
+};
+
+static void
+iopmp_register_types(void)
+{
+ type_register_static(&iopmp_info);
+ type_register_static(&iopmp_iommu_memory_region_info);
+}
+
+type_init(iopmp_register_types);
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
index 1be0717c0c..c148166d2d 100644
--- a/hw/misc/trace-events
+++ b/hw/misc/trace-events
@@ -362,3 +362,6 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
+# riscv_iopmp.c
+iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
+iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
new file mode 100644
index 0000000000..b8fe479108
--- /dev/null
+++ b/include/hw/misc/riscv_iopmp.h
@@ -0,0 +1,168 @@
+/*
+ * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
+ *
+ * Copyright (c) 2023-2024 Andes Tech. Corp.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef RISCV_IOPMP_H
+#define RISCV_IOPMP_H
+
+#include "hw/sysbus.h"
+#include "qemu/typedefs.h"
+#include "memory.h"
+#include "exec/hwaddr.h"
+
+#define TYPE_IOPMP "iopmp"
+#define IOPMP(obj) OBJECT_CHECK(IopmpState, (obj), TYPE_IOPMP)
+
+#define IOPMP_MAX_MD_NUM 63
+#define IOPMP_MAX_RRID_NUM 65535
+#define IOPMP_MAX_ENTRY_NUM 65535
+
+#define VENDER_VIRT 0
+#define SPECVER_0_9_1 91
+#define IMPID_0_9_1 91
+
+#define RRE_ERROR 0
+#define RRE_SUCCESS_VALUE 1
+
+#define RWE_ERROR 0
+#define RWE_SUCCESS 1
+
+#define RXE_ERROR 0
+#define RXE_SUCCESS_VALUE 1
+
+#define ERR_REQINFO_TTYPE_READ 1
+#define ERR_REQINFO_TTYPE_WRITE 2
+#define ERR_REQINFO_TTYPE_FETCH 3
+#define ERR_REQINFO_ETYPE_NOERROR 0
+#define ERR_REQINFO_ETYPE_READ 1
+#define ERR_REQINFO_ETYPE_WRITE 2
+#define ERR_REQINFO_ETYPE_FETCH 3
+#define ERR_REQINFO_ETYPE_PARHIT 4
+#define ERR_REQINFO_ETYPE_NOHIT 5
+#define ERR_REQINFO_ETYPE_RRID 6
+#define ERR_REQINFO_ETYPE_USER 7
+
+#define IOPMP_MODEL_FULL 0
+#define IOPMP_MODEL_RAPIDK 0x1
+#define IOPMP_MODEL_DYNAMICK 0x2
+#define IOPMP_MODEL_ISOLATION 0x3
+#define IOPMP_MODEL_COMPACTK 0x4
+
+#define ENTRY_NO_HIT 0
+#define ENTRY_PAR_HIT 1
+#define ENTRY_HIT 2
+
+/* The generic iopmp address space which downstream is system memory */
+extern AddressSpace iopmp_container_as;
+
+typedef enum {
+ IOPMP_AMATCH_OFF, /* Null (off) */
+ IOPMP_AMATCH_TOR, /* Top of Range */
+ IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */
+ IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */
+} iopmp_am_t;
+
+typedef enum {
+ IOPMP_ACCESS_READ = 1,
+ IOPMP_ACCESS_WRITE = 2,
+ IOPMP_ACCESS_FETCH = 3
+} iopmp_access_type;
+
+typedef enum {
+ IOPMP_NONE = 0,
+ IOPMP_RO = 1,
+ IOPMP_WO = 2,
+ IOPMP_RW = 3,
+ IOPMP_XO = 4,
+ IOPMP_RX = 5,
+ IOPMP_WX = 6,
+ IOPMP_RWX = 7,
+} iopmp_permission;
+
+typedef struct {
+ uint32_t addr_reg;
+ uint32_t addrh_reg;
+ uint32_t cfg_reg;
+} iopmp_entry_t;
+
+typedef struct {
+ uint64_t sa;
+ uint64_t ea;
+} iopmp_addr_t;
+
+typedef struct {
+ uint32_t *srcmd_en;
+ uint32_t *srcmd_enh;
+ uint32_t *mdcfg;
+ iopmp_entry_t *entry;
+ uint32_t mdlck;
+ uint32_t mdlckh;
+ uint32_t entrylck;
+ uint32_t mdcfglck;
+ uint32_t mdstall;
+ uint32_t mdstallh;
+ uint32_t rridscp;
+ uint32_t err_cfg;
+ uint64_t err_reqaddr;
+ uint32_t err_reqid;
+ uint32_t err_reqinfo;
+} iopmp_regs;
+
+
+/* To detect partially hit */
+typedef struct iopmp_transaction_state {
+ bool running;
+ bool supported;
+ hwaddr start_addr;
+ hwaddr end_addr;
+} iopmp_transaction_state;
+
+typedef struct IopmpState {
+ SysBusDevice parent_obj;
+ iopmp_addr_t *entry_addr;
+ MemoryRegion mmio;
+ IOMMUMemoryRegion iommu;
+ IOMMUMemoryRegion *next_iommu;
+ iopmp_regs regs;
+ MemoryRegion *downstream;
+ MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx,
+ blocked_wx, blocked_rwx;
+ MemoryRegion stall_io;
+ uint32_t model;
+ uint32_t k;
+ bool prient_prog;
+ bool default_prient_prog;
+ iopmp_transaction_state *transaction_state;
+ QemuMutex iopmp_transaction_mutex;
+
+ AddressSpace downstream_as;
+ AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as,
+ blocked_rx_as, blocked_wx_as, blocked_rwx_as;
+ qemu_irq irq;
+ bool enable;
+
+ uint32_t prio_entry;
+ uint32_t rrid_num;
+ uint32_t md_num;
+ uint32_t entry_num;
+ uint32_t entry_offset;
+ uint32_t fabricated_v;
+} IopmpState;
+
+#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (3 preceding siblings ...)
2024-07-15 9:56 ` [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device Ethan Chen via
@ 2024-07-15 10:12 ` Ethan Chen via
2024-08-08 4:23 ` Alistair Francis
2024-07-15 10:14 ` [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support Ethan Chen via
` (3 subsequent siblings)
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 10:12 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
To enable system memory transactions through the IOPMP, memory regions must
be moved to the IOPMP downstream and then replaced with IOMMUs for IOPMP
translation.
The iopmp_setup_system_memory() function copies subregions of system memory
to create the IOPMP downstream and then replaces the specified memory
regions in system memory with the IOMMU regions of the IOPMP. It also
adds entries to a protection map that records the relationship between
physical address regions and the IOPMP, which is used by the IOPMP DMA
API to send transaction information.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
hw/misc/riscv_iopmp.c | 61 +++++++++++++++++++++++++++++++++++
include/hw/misc/riscv_iopmp.h | 3 ++
2 files changed, 64 insertions(+)
diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
index db43e3c73f..e62ac57437 100644
--- a/hw/misc/riscv_iopmp.c
+++ b/hw/misc/riscv_iopmp.c
@@ -1151,4 +1151,65 @@ iopmp_register_types(void)
type_register_static(&iopmp_iommu_memory_region_info);
}
+/*
+ * Copies subregions from the source memory region to the destination memory
+ * region
+ */
+static void copy_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr)
+{
+ int32_t priority;
+ hwaddr addr;
+ MemoryRegion *alias, *subregion;
+ QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) {
+ priority = subregion->priority;
+ addr = subregion->addr;
+ alias = g_malloc0(sizeof(MemoryRegion));
+ memory_region_init_alias(alias, NULL, subregion->name, subregion, 0,
+ memory_region_size(subregion));
+ memory_region_add_subregion_overlap(dst_mr, addr, alias, priority);
+ }
+}
+
+/*
+ * Create downstream of system memory for IOPMP, and overlap memory region
+ * specified in memmap with IOPMP translator. Make sure subregions are added to
+ * system memory before call this function. It also add entry to
+ * iopmp_protection_memmaps for recording the relationship between physical
+ * address regions and IOPMP.
+ */
+void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
+ uint32_t map_entry_num)
+{
+ IopmpState *s = IOPMP(dev);
+ uint32_t i;
+ MemoryRegion *iommu_alias;
+ MemoryRegion *target_mr = get_system_memory();
+ MemoryRegion *downstream = g_malloc0(sizeof(MemoryRegion));
+ memory_region_init(downstream, NULL, "iopmp_downstream",
+ memory_region_size(target_mr));
+ /* Copy subregions of target to downstream */
+ copy_memory_subregions(target_mr, downstream);
+
+ iopmp_protection_memmap *map;
+ for (i = 0; i < map_entry_num; i++) {
+ /* Memory access to protected regions of target are through IOPMP */
+ iommu_alias = g_new(MemoryRegion, 1);
+ memory_region_init_alias(iommu_alias, NULL, "iommu_alias",
+ MEMORY_REGION(&s->iommu), memmap[i].base,
+ memmap[i].size);
+ memory_region_add_subregion_overlap(target_mr, memmap[i].base,
+ iommu_alias, 1);
+ /* Record which IOPMP is responsible for the region */
+ map = g_new0(iopmp_protection_memmap, 1);
+ map->iopmp_s = s;
+ map->entry.base = memmap[i].base;
+ map->entry.size = memmap[i].size;
+ QLIST_INSERT_HEAD(&iopmp_protection_memmaps, map, list);
+ }
+ s->downstream = downstream;
+ address_space_init(&s->downstream_as, s->downstream,
+ "iopmp-downstream-as");
+}
+
+
type_init(iopmp_register_types);
diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
index b8fe479108..ebe9c4bc4a 100644
--- a/include/hw/misc/riscv_iopmp.h
+++ b/include/hw/misc/riscv_iopmp.h
@@ -165,4 +165,7 @@ typedef struct IopmpState {
uint32_t fabricated_v;
} IopmpState;
+void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
+ uint32_t mapentry_num);
+
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (4 preceding siblings ...)
2024-07-15 10:12 ` [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory Ethan Chen via
@ 2024-07-15 10:14 ` Ethan Chen via
2024-08-08 4:25 ` Alistair Francis
2024-07-15 10:14 ` [PATCH v8 7/8] hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API Ethan Chen via
` (2 subsequent siblings)
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 10:14 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
The iopmp_setup_cpu() function configures the RISCV CPU to support IOPMP and
specifies the CPU's RRID.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
hw/misc/riscv_iopmp.c | 6 ++++++
include/hw/misc/riscv_iopmp.h | 1 +
2 files changed, 7 insertions(+)
diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
index e62ac57437..374bf5c610 100644
--- a/hw/misc/riscv_iopmp.c
+++ b/hw/misc/riscv_iopmp.c
@@ -1211,5 +1211,11 @@ void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
"iopmp-downstream-as");
}
+void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid)
+{
+ cpu->cfg.iopmp = true;
+ cpu->cfg.iopmp_rrid = rrid;
+}
+
type_init(iopmp_register_types);
diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
index ebe9c4bc4a..7e7da56d10 100644
--- a/include/hw/misc/riscv_iopmp.h
+++ b/include/hw/misc/riscv_iopmp.h
@@ -167,5 +167,6 @@ typedef struct IopmpState {
void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
uint32_t mapentry_num);
+void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid);
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 7/8] hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (5 preceding siblings ...)
2024-07-15 10:14 ` [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support Ethan Chen via
@ 2024-07-15 10:14 ` Ethan Chen via
2024-07-15 10:14 ` [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support Ethan Chen via
2024-11-05 18:36 ` [PATCH v8 0/8] Support RISC-V IOPMP Daniel Henrique Barboza
8 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 10:14 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
The iopmp_dma_rw() function performs memory read/write operations to system
memory with support for IOPMP. It sends transaction information to the IOPMP
for partial hit detection.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
hw/misc/riscv_iopmp.c | 68 +++++++++++++++++++++++++++++++++++
include/hw/misc/riscv_iopmp.h | 3 +-
2 files changed, 70 insertions(+), 1 deletion(-)
diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
index 374bf5c610..0be32ca819 100644
--- a/hw/misc/riscv_iopmp.c
+++ b/hw/misc/riscv_iopmp.c
@@ -1217,5 +1217,73 @@ void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid)
cpu->cfg.iopmp_rrid = rrid;
}
+static void send_transaction_start(IopmpState *s, uint32_t rrid)
+{
+ int flag = 0;
+ if (rrid < s->rrid_num) {
+ while (flag == 0) {
+ /* Wait last transaction of rrid complete */
+ while (s->transaction_state[rrid].running) {
+ ;
+ }
+ qemu_mutex_lock(&s->iopmp_transaction_mutex);
+ /* Check status again */
+ if (s->transaction_state[rrid].running == false) {
+ s->transaction_state[rrid].running = true;
+ s->transaction_state[rrid].supported = true;
+ flag = 1;
+ }
+ qemu_mutex_unlock(&s->iopmp_transaction_mutex);
+ }
+ }
+}
+
+static void send_transaction_complete(IopmpState *s, uint32_t rrid)
+{
+ if (rrid < s->rrid_num) {
+ qemu_mutex_lock(&s->iopmp_transaction_mutex);
+ s->transaction_state[rrid].running = false;
+ s->transaction_state[rrid].supported = false;
+ qemu_mutex_unlock(&s->iopmp_transaction_mutex);
+ }
+}
+
+static void send_transaction_info(IopmpState *s, uint32_t rrid,
+ hwaddr start_addr, hwaddr size)
+{
+ if (rrid < s->rrid_num) {
+ s->transaction_state[rrid].start_addr = start_addr;
+ s->transaction_state[rrid].end_addr = start_addr + size - 1;
+ }
+}
+
+/*
+ * Perform address_space_rw to system memory and send transaction information
+ * to correspond IOPMP for partially hit detection.
+ */
+MemTxResult iopmp_dma_rw(hwaddr addr, uint32_t rrid, void *buf, hwaddr len,
+ bool is_write)
+{
+ MemTxResult result;
+ MemTxAttrs attrs;
+ iopmp_protection_memmap *map;
+ /* Find which IOPMP is responsible for receiving transaction information */
+ QLIST_FOREACH(map, &iopmp_protection_memmaps, list) {
+ if (addr >= map->entry.base &&
+ addr < map->entry.base + map->entry.size) {
+ send_transaction_start(map->iopmp_s, rrid);
+ send_transaction_info(map->iopmp_s, rrid, addr, len);
+ break;
+ }
+ }
+
+ attrs.requester_id = rrid;
+ result = address_space_rw(&address_space_memory, addr, attrs, buf, len,
+ is_write);
+ if (map) {
+ send_transaction_complete(map->iopmp_s, rrid);
+ }
+ return result;
+}
type_init(iopmp_register_types);
diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
index 7e7da56d10..d87395170d 100644
--- a/include/hw/misc/riscv_iopmp.h
+++ b/include/hw/misc/riscv_iopmp.h
@@ -168,5 +168,6 @@ typedef struct IopmpState {
void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
uint32_t mapentry_num);
void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid);
-
+MemTxResult iopmp_dma_rw(hwaddr addr, uint32_t rrid, void *buf, hwaddr len,
+ bool is_write);
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (6 preceding siblings ...)
2024-07-15 10:14 ` [PATCH v8 7/8] hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API Ethan Chen via
@ 2024-07-15 10:14 ` Ethan Chen via
2024-08-08 4:01 ` Alistair Francis
2024-11-05 18:36 ` [PATCH v8 0/8] Support RISC-V IOPMP Daniel Henrique Barboza
8 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-07-15 10:14 UTC (permalink / raw)
To: qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, dbarboza, zhiwei_liu,
qemu-riscv, Ethan Chen
- Add 'iopmp=on' option to enable IOPMP. It adds an iopmp device virt machine
to protect all regions of system memory, and configures RRID of CPU.
Signed-off-by: Ethan Chen <ethan84@andestech.com>
---
docs/system/riscv/virt.rst | 5 +++
hw/riscv/Kconfig | 1 +
hw/riscv/virt.c | 63 ++++++++++++++++++++++++++++++++++++++
include/hw/riscv/virt.h | 5 ++-
4 files changed, 73 insertions(+), 1 deletion(-)
diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
index 9a06f95a34..9fd006ccc2 100644
--- a/docs/system/riscv/virt.rst
+++ b/docs/system/riscv/virt.rst
@@ -116,6 +116,11 @@ The following machine-specific options are supported:
having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
the default number of per-HART VS-level AIA IMSIC pages is 0.
+- iopmp=[on|off]
+
+ When this option is "on", an IOPMP device is added to machine. IOPMP checks
+ memory transcations in system memory. This option is assumed to be "off".
+
Running Linux kernel
--------------------
diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
index a2030e3a6f..0b45a5ade2 100644
--- a/hw/riscv/Kconfig
+++ b/hw/riscv/Kconfig
@@ -56,6 +56,7 @@ config RISCV_VIRT
select PLATFORM_BUS
select ACPI
select ACPI_PCI
+ select RISCV_IOPMP
config SHAKTI_C
bool
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index bc0893e087..5a03c03c4a 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -55,6 +55,7 @@
#include "hw/acpi/aml-build.h"
#include "qapi/qapi-visit-common.h"
#include "hw/virtio/virtio-iommu.h"
+#include "hw/misc/riscv_iopmp.h"
/* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
static bool virt_use_kvm_aia(RISCVVirtState *s)
@@ -82,6 +83,7 @@ static const MemMapEntry virt_memmap[] = {
[VIRT_UART0] = { 0x10000000, 0x100 },
[VIRT_VIRTIO] = { 0x10001000, 0x1000 },
[VIRT_FW_CFG] = { 0x10100000, 0x18 },
+ [VIRT_IOPMP] = { 0x10200000, 0x100000 },
[VIRT_FLASH] = { 0x20000000, 0x4000000 },
[VIRT_IMSIC_M] = { 0x24000000, VIRT_IMSIC_MAX_SIZE },
[VIRT_IMSIC_S] = { 0x28000000, VIRT_IMSIC_MAX_SIZE },
@@ -90,6 +92,11 @@ static const MemMapEntry virt_memmap[] = {
[VIRT_DRAM] = { 0x80000000, 0x0 },
};
+static const MemMapEntry iopmp_protect_memmap[] = {
+ /* IOPMP protect all regions by default */
+ {0, 0xFFFFFFFF},
+};
+
/* PCIe high mmio is fixed for RV32 */
#define VIRT32_HIGH_PCIE_MMIO_BASE 0x300000000ULL
#define VIRT32_HIGH_PCIE_MMIO_SIZE (4 * GiB)
@@ -1024,6 +1031,24 @@ static void create_fdt_virtio_iommu(RISCVVirtState *s, uint16_t bdf)
bdf + 1, iommu_phandle, bdf + 1, 0xffff - bdf);
}
+static void create_fdt_iopmp(RISCVVirtState *s, const MemMapEntry *memmap,
+ uint32_t irq_mmio_phandle) {
+ g_autofree char *name = NULL;
+ MachineState *ms = MACHINE(s);
+
+ name = g_strdup_printf("/soc/iopmp@%lx", (long)memmap[VIRT_IOPMP].base);
+ qemu_fdt_add_subnode(ms->fdt, name);
+ qemu_fdt_setprop_string(ms->fdt, name, "compatible", "riscv_iopmp");
+ qemu_fdt_setprop_cells(ms->fdt, name, "reg", 0x0, memmap[VIRT_IOPMP].base,
+ 0x0, memmap[VIRT_IOPMP].size);
+ qemu_fdt_setprop_cell(ms->fdt, name, "interrupt-parent", irq_mmio_phandle);
+ if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+ qemu_fdt_setprop_cell(ms->fdt, name, "interrupts", IOPMP_IRQ);
+ } else {
+ qemu_fdt_setprop_cells(ms->fdt, name, "interrupts", IOPMP_IRQ, 0x4);
+ }
+}
+
static void finalize_fdt(RISCVVirtState *s)
{
uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1;
@@ -1042,6 +1067,10 @@ static void finalize_fdt(RISCVVirtState *s)
create_fdt_uart(s, virt_memmap, irq_mmio_phandle);
create_fdt_rtc(s, virt_memmap, irq_mmio_phandle);
+
+ if (s->have_iopmp) {
+ create_fdt_iopmp(s, virt_memmap, irq_mmio_phandle);
+ }
}
static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap)
@@ -1425,6 +1454,7 @@ static void virt_machine_init(MachineState *machine)
DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
int i, base_hartid, hart_count;
int socket_count = riscv_socket_count(machine);
+ int cpu, socket;
/* Check socket count limit */
if (VIRT_SOCKETS_MAX < socket_count) {
@@ -1606,6 +1636,19 @@ static void virt_machine_init(MachineState *machine)
}
virt_flash_map(s, system_memory);
+ if (s->have_iopmp) {
+ DeviceState *iopmp_dev = sysbus_create_simple(TYPE_IOPMP,
+ memmap[VIRT_IOPMP].base,
+ qdev_get_gpio_in(DEVICE(mmio_irqchip), IOPMP_IRQ));
+
+ for (socket = 0; socket < socket_count; socket++) {
+ for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
+ iopmp_setup_cpu(&s->soc[socket].harts[cpu], 0);
+ }
+ }
+ iopmp_setup_system_memory(iopmp_dev, iopmp_protect_memmap, 1);
+ }
+
/* load/create device tree */
if (machine->dtb) {
machine->fdt = load_device_tree(machine->dtb, &s->fdt_size);
@@ -1702,6 +1745,20 @@ static void virt_set_aclint(Object *obj, bool value, Error **errp)
s->have_aclint = value;
}
+static bool virt_get_iopmp(Object *obj, Error **errp)
+{
+ RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
+
+ return s->have_iopmp;
+}
+
+static void virt_set_iopmp(Object *obj, bool value, Error **errp)
+{
+ RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
+
+ s->have_iopmp = value;
+}
+
bool virt_is_acpi_enabled(RISCVVirtState *s)
{
return s->acpi != ON_OFF_AUTO_OFF;
@@ -1814,6 +1871,12 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
NULL, NULL);
object_class_property_set_description(oc, "acpi",
"Enable ACPI");
+
+ object_class_property_add_bool(oc, "iopmp", virt_get_iopmp,
+ virt_set_iopmp);
+ object_class_property_set_description(oc, "iopmp",
+ "Set on/off to enable/disable "
+ "iopmp device");
}
static const TypeInfo virt_machine_typeinfo = {
diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index c0dc41ff9a..009b4ebea7 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -55,6 +55,7 @@ struct RISCVVirtState {
int fdt_size;
bool have_aclint;
+ bool have_iopmp;
RISCVVirtAIAType aia_type;
int aia_guests;
char *oem_id;
@@ -84,12 +85,14 @@ enum {
VIRT_PCIE_MMIO,
VIRT_PCIE_PIO,
VIRT_PLATFORM_BUS,
- VIRT_PCIE_ECAM
+ VIRT_PCIE_ECAM,
+ VIRT_IOPMP,
};
enum {
UART0_IRQ = 10,
RTC_IRQ = 11,
+ IOPMP_IRQ = 12,
VIRTIO_IRQ = 1, /* 1 to 8 */
VIRTIO_COUNT = 8,
PCIE_IRQ = 0x20, /* 32 to 35 */
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device
2024-07-15 9:56 ` [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device Ethan Chen via
@ 2024-08-08 3:56 ` Alistair Francis
2024-08-09 9:42 ` Ethan Chen via
2024-08-09 10:03 ` Ethan Chen via
0 siblings, 2 replies; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 3:56 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 7:58 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> Support basic functions of IOPMP specification v0.9.1 rapid-k model.
> The specification url:
> https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
>
> The IOPMP checks whether memory access from a device or CPU is valid.
> This implementation uses an IOMMU to modify the address space accessed
> by the device.
>
> For device access with IOMMUAccessFlags specifying read or write
> (IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in
> iopmp_translate. If the access is valid, the target address space is
> downstream_as. If the access is blocked, it will be redirected to
> blocked_rwx_as.
>
> For CPU access with IOMMUAccessFlags not specifying read or write
> (IOMMU_NONE), the IOPMP translates the access to the corresponding
> address space based on the permission. If the access has full permission
> (rwx), the target address space is downstream_as. If the access has
> limited permission, the target address space is blocked_ followed by
> the lacked permissions.
>
> The operation of a blocked region can trigger an IOPMP interrupt, a bus
> error, or it can respond with success and fabricated data, depending on
> the value of the IOPMP ERR_CFG register.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
> ---
> hw/misc/Kconfig | 3 +
> hw/misc/meson.build | 1 +
> hw/misc/riscv_iopmp.c | 1154 +++++++++++++++++++++++++++++++++
> hw/misc/trace-events | 3 +
> include/hw/misc/riscv_iopmp.h | 168 +++++
> 5 files changed, 1329 insertions(+)
> create mode 100644 hw/misc/riscv_iopmp.c
> create mode 100644 include/hw/misc/riscv_iopmp.h
>
> diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
> index 1e08785b83..427f0c702e 100644
> --- a/hw/misc/Kconfig
> +++ b/hw/misc/Kconfig
> @@ -213,4 +213,7 @@ config IOSB
> config XLNX_VERSAL_TRNG
> bool
>
> +config RISCV_IOPMP
> + bool
> +
> source macio/Kconfig
> diff --git a/hw/misc/meson.build b/hw/misc/meson.build
> index 2ca8717be2..d9006e1d81 100644
> --- a/hw/misc/meson.build
> +++ b/hw/misc/meson.build
> @@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c'))
> system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c'))
> system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c'))
> system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
> +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c'))
>
> subdir('macio')
>
> diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> new file mode 100644
> index 0000000000..db43e3c73f
> --- /dev/null
> +++ b/hw/misc/riscv_iopmp.c
> @@ -0,0 +1,1154 @@
> +/*
> + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> + *
> + * Copyright (c) 2023-2024 Andes Tech. Corp.
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/log.h"
> +#include "qapi/error.h"
> +#include "trace.h"
> +#include "exec/exec-all.h"
> +#include "exec/address-spaces.h"
> +#include "hw/qdev-properties.h"
> +#include "hw/sysbus.h"
> +#include "hw/misc/riscv_iopmp.h"
> +#include "memory.h"
> +#include "hw/irq.h"
> +#include "hw/registerfields.h"
> +#include "trace.h"
> +
> +#define TYPE_IOPMP_IOMMU_MEMORY_REGION "iopmp-iommu-memory-region"
> +
> +REG32(VERSION, 0x00)
> + FIELD(VERSION, VENDOR, 0, 24)
> + FIELD(VERSION, SPECVER , 24, 8)
> +REG32(IMP, 0x04)
> + FIELD(IMP, IMPID, 0, 32)
> +REG32(HWCFG0, 0x08)
> + FIELD(HWCFG0, MODEL, 0, 4)
> + FIELD(HWCFG0, TOR_EN, 4, 1)
> + FIELD(HWCFG0, SPS_EN, 5, 1)
> + FIELD(HWCFG0, USER_CFG_EN, 6, 1)
> + FIELD(HWCFG0, PRIENT_PROG, 7, 1)
> + FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1)
> + FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1)
> + FIELD(HWCFG0, CHK_X, 10, 1)
> + FIELD(HWCFG0, NO_X, 11, 1)
> + FIELD(HWCFG0, NO_W, 12, 1)
> + FIELD(HWCFG0, STALL_EN, 13, 1)
> + FIELD(HWCFG0, PEIS, 14, 1)
> + FIELD(HWCFG0, PEES, 15, 1)
> + FIELD(HWCFG0, MFR_EN, 16, 1)
> + FIELD(HWCFG0, MD_NUM, 24, 7)
> + FIELD(HWCFG0, ENABLE, 31, 1)
> +REG32(HWCFG1, 0x0C)
> + FIELD(HWCFG1, RRID_NUM, 0, 16)
> + FIELD(HWCFG1, ENTRY_NUM, 16, 16)
> +REG32(HWCFG2, 0x10)
> + FIELD(HWCFG2, PRIO_ENTRY, 0, 16)
> + FIELD(HWCFG2, RRID_TRANSL, 16, 16)
> +REG32(ENTRYOFFSET, 0x14)
> + FIELD(ENTRYOFFSET, OFFSET, 0, 32)
> +REG32(MDSTALL, 0x30)
> + FIELD(MDSTALL, EXEMPT, 0, 1)
> + FIELD(MDSTALL, MD, 1, 31)
> +REG32(MDSTALLH, 0x34)
> + FIELD(MDSTALLH, MD, 0, 32)
> +REG32(RRIDSCP, 0x38)
> + FIELD(RRIDSCP, RRID, 0, 16)
> + FIELD(RRIDSCP, OP, 30, 2)
> +REG32(MDLCK, 0x40)
> + FIELD(MDLCK, L, 0, 1)
> + FIELD(MDLCK, MD, 1, 31)
> +REG32(MDLCKH, 0x44)
> + FIELD(MDLCKH, MDH, 0, 32)
> +REG32(MDCFGLCK, 0x48)
> + FIELD(MDCFGLCK, L, 0, 1)
> + FIELD(MDCFGLCK, F, 1, 7)
> +REG32(ENTRYLCK, 0x4C)
> + FIELD(ENTRYLCK, L, 0, 1)
> + FIELD(ENTRYLCK, F, 1, 16)
> +REG32(ERR_CFG, 0x60)
> + FIELD(ERR_CFG, L, 0, 1)
> + FIELD(ERR_CFG, IE, 1, 1)
> + FIELD(ERR_CFG, IRE, 2, 1)
> + FIELD(ERR_CFG, IWE, 3, 1)
> + FIELD(ERR_CFG, IXE, 4, 1)
> + FIELD(ERR_CFG, RRE, 5, 1)
> + FIELD(ERR_CFG, RWE, 6, 1)
> + FIELD(ERR_CFG, RXE, 7, 1)
> +REG32(ERR_REQINFO, 0x64)
> + FIELD(ERR_REQINFO, V, 0, 1)
> + FIELD(ERR_REQINFO, TTYPE, 1, 2)
> + FIELD(ERR_REQINFO, ETYPE, 4, 3)
> + FIELD(ERR_REQINFO, SVC, 7, 1)
> +REG32(ERR_REQADDR, 0x68)
> + FIELD(ERR_REQADDR, ADDR, 0, 32)
> +REG32(ERR_REQADDRH, 0x6C)
> + FIELD(ERR_REQADDRH, ADDRH, 0, 32)
> +REG32(ERR_REQID, 0x70)
> + FIELD(ERR_REQID, RRID, 0, 16)
> + FIELD(ERR_REQID, EID, 16, 16)
> +REG32(ERR_MFR, 0x74)
> + FIELD(ERR_MFR, SVW, 0, 16)
> + FIELD(ERR_MFR, SVI, 16, 12)
> + FIELD(ERR_MFR, SVS, 31, 1)
> +REG32(MDCFG0, 0x800)
> + FIELD(MDCFG0, T, 0, 16)
> +REG32(SRCMD_EN0, 0x1000)
> + FIELD(SRCMD_EN0, L, 0, 1)
> + FIELD(SRCMD_EN0, MD, 1, 31)
> +REG32(SRCMD_ENH0, 0x1004)
> + FIELD(SRCMD_ENH0, MDH, 0, 32)
> +REG32(SRCMD_R0, 0x1008)
> + FIELD(SRCMD_R0, MD, 1, 31)
> +REG32(SRCMD_RH0, 0x100C)
> + FIELD(SRCMD_RH0, MDH, 0, 32)
> +REG32(SRCMD_W0, 0x1010)
> + FIELD(SRCMD_W0, MD, 1, 31)
> +REG32(SRCMD_WH0, 0x1014)
> + FIELD(SRCMD_WH0, MDH, 0, 32)
> +
> +FIELD(ENTRY_ADDR, ADDR, 0, 32)
> +FIELD(ENTRY_ADDRH, ADDRH, 0, 32)
> +
> +FIELD(ENTRY_CFG, R, 0, 1)
> +FIELD(ENTRY_CFG, W, 1, 1)
> +FIELD(ENTRY_CFG, X, 2, 1)
> +FIELD(ENTRY_CFG, A, 3, 2)
> +FIELD(ENTRY_CFG, SIRE, 5, 1)
> +FIELD(ENTRY_CFG, SIWE, 6, 1)
> +FIELD(ENTRY_CFG, SIXE, 7, 1)
> +FIELD(ENTRY_CFG, SERE, 8, 1)
> +FIELD(ENTRY_CFG, SEWE, 9, 1)
> +FIELD(ENTRY_CFG, SEXE, 10, 1)
> +
> +FIELD(ENTRY_USER_CFG, IM, 0, 32)
> +
> +/* Offsets to SRCMD_EN(i) */
> +#define SRCMD_EN_OFFSET 0x0
> +#define SRCMD_ENH_OFFSET 0x4
> +#define SRCMD_R_OFFSET 0x8
> +#define SRCMD_RH_OFFSET 0xC
> +#define SRCMD_W_OFFSET 0x10
> +#define SRCMD_WH_OFFSET 0x14
> +
> +/* Offsets to ENTRY_ADDR(i) */
> +#define ENTRY_ADDR_OFFSET 0x0
> +#define ENTRY_ADDRH_OFFSET 0x4
> +#define ENTRY_CFG_OFFSET 0x8
> +#define ENTRY_USER_CFG_OFFSET 0xC
> +
> +/* Memmap for parallel IOPMPs */
> +typedef struct iopmp_protection_memmap {
> + MemMapEntry entry;
> + IopmpState *iopmp_s;
> + QLIST_ENTRY(iopmp_protection_memmap) list;
> +} iopmp_protection_memmap;
> +QLIST_HEAD(, iopmp_protection_memmap)
> + iopmp_protection_memmaps = QLIST_HEAD_INITIALIZER(iopmp_protection_memmaps);
> +
> +static void iopmp_iommu_notify(IopmpState *s)
> +{
> + IOMMUTLBEvent event = {
> + .entry = {
> + .iova = 0,
> + .translated_addr = 0,
> + .addr_mask = -1ULL,
> + .perm = IOMMU_NONE,
> + },
> + .type = IOMMU_NOTIFIER_UNMAP,
> + };
> +
> + for (int i = 0; i < s->rrid_num; i++) {
> + memory_region_notify_iommu(&s->iommu, i, event);
> + }
> +}
> +
> +static void iopmp_decode_napot(uint64_t a, uint64_t *sa,
> + uint64_t *ea)
> +{
> + /*
> + * aaaa...aaa0 8-byte NAPOT range
> + * aaaa...aa01 16-byte NAPOT range
> + * aaaa...a011 32-byte NAPOT range
> + * ...
> + * aa01...1111 2^XLEN-byte NAPOT range
> + * a011...1111 2^(XLEN+1)-byte NAPOT range
> + * 0111...1111 2^(XLEN+2)-byte NAPOT range
> + * 1111...1111 Reserved
> + */
> +
> + a = (a << 2) | 0x3;
> + *sa = a & (a + 1);
> + *ea = a | (a + 1);
> +}
> +
> +static void iopmp_update_rule(IopmpState *s, uint32_t entry_index)
> +{
> + uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg;
> + uint64_t this_addr = s->regs.entry[entry_index].addr_reg |
> + ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32);
> + uint64_t prev_addr = 0u;
> + uint64_t sa = 0u;
> + uint64_t ea = 0u;
> +
> + if (entry_index >= 1u) {
> + prev_addr = s->regs.entry[entry_index - 1].addr_reg |
> + ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32);
> + }
> +
> + switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) {
> + case IOPMP_AMATCH_OFF:
> + sa = 0u;
> + ea = -1;
> + break;
> +
> + case IOPMP_AMATCH_TOR:
> + sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */
> + ea = ((this_addr) << 2) - 1u;
> + if (sa > ea) {
> + sa = ea = 0u;
> + }
> + break;
> +
> + case IOPMP_AMATCH_NA4:
> + sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
> + ea = (sa + 4u) - 1u;
> + break;
> +
> + case IOPMP_AMATCH_NAPOT:
> + iopmp_decode_napot(this_addr, &sa, &ea);
> + break;
> +
> + default:
> + sa = 0u;
> + ea = 0u;
> + break;
> + }
> +
> + s->entry_addr[entry_index].sa = sa;
> + s->entry_addr[entry_index].ea = ea;
> + iopmp_iommu_notify(s);
> +}
> +
> +static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size)
> +{
> + IopmpState *s = IOPMP(opaque);
> + uint32_t rz = 0;
> + uint32_t offset, idx;
> +
> + switch (addr) {
> + case A_VERSION:
> + rz = VENDER_VIRT << R_VERSION_VENDOR_SHIFT |
> + SPECVER_0_9_1 << R_VERSION_SPECVER_SHIFT;
It would be better to use the FIELD_DP32() macro instead of the manual shifts
> + break;
> + case A_IMP:
> + rz = IMPID_0_9_1;
> + break;
> + case A_HWCFG0:
> + rz = s->model << R_HWCFG0_MODEL_SHIFT |
> + 1 << R_HWCFG0_TOR_EN_SHIFT |
> + 0 << R_HWCFG0_SPS_EN_SHIFT |
> + 0 << R_HWCFG0_USER_CFG_EN_SHIFT |
> + s->prient_prog << R_HWCFG0_PRIENT_PROG_SHIFT |
> + 0 << R_HWCFG0_RRID_TRANSL_EN_SHIFT |
> + 0 << R_HWCFG0_RRID_TRANSL_PROG_SHIFT |
> + 1 << R_HWCFG0_CHK_X_SHIFT |
> + 0 << R_HWCFG0_NO_X_SHIFT |
> + 0 << R_HWCFG0_NO_W_SHIFT |
> + 0 << R_HWCFG0_STALL_EN_SHIFT |
> + 0 << R_HWCFG0_PEIS_SHIFT |
> + 0 << R_HWCFG0_PEES_SHIFT |
> + 0 << R_HWCFG0_MFR_EN_SHIFT |
> + s->md_num << R_HWCFG0_MD_NUM_SHIFT |
> + s->enable << R_HWCFG0_ENABLE_SHIFT ;
> + break;
> + case A_HWCFG1:
> + rz = s->rrid_num << R_HWCFG1_RRID_NUM_SHIFT |
> + s->entry_num << R_HWCFG1_ENTRY_NUM_SHIFT;
> + break;
> + case A_HWCFG2:
> + rz = s->prio_entry << R_HWCFG2_PRIO_ENTRY_SHIFT;
> + break;
> + case A_ENTRYOFFSET:
> + rz = s->entry_offset;
> + break;
> + case A_ERR_CFG:
> + rz = s->regs.err_cfg;
> + break;
> + case A_MDLCK:
> + rz = s->regs.mdlck;
> + break;
> + case A_MDLCKH:
> + rz = s->regs.mdlckh;
> + break;
> + case A_MDCFGLCK:
> + rz = s->regs.mdcfglck;
> + break;
> + case A_ENTRYLCK:
> + rz = s->regs.entrylck;
> + break;
> + case A_ERR_REQADDR:
> + rz = s->regs.err_reqaddr & UINT32_MAX;
> + break;
> + case A_ERR_REQADDRH:
> + rz = s->regs.err_reqaddr >> 32;
> + break;
> + case A_ERR_REQID:
> + rz = s->regs.err_reqid;
> + break;
> + case A_ERR_REQINFO:
> + rz = s->regs.err_reqinfo;
> + break;
> +
> + default:
> + if (addr >= A_MDCFG0 &&
> + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> + offset = addr - A_MDCFG0;
> + idx = offset >> 2;
> + if (idx == 0 && offset == 0) {
> + rz = s->regs.mdcfg[idx];
> + } else {
> + /* Only MDCFG0 is implemented in rapid-k model */
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + }
> + } else if (addr >= A_SRCMD_EN0 &&
> + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> + offset = addr - A_SRCMD_EN0;
> + idx = offset >> 5;
> + offset &= 0x1f;
> +
> + switch (offset) {
> + case SRCMD_EN_OFFSET:
> + rz = s->regs.srcmd_en[idx];
> + break;
> + case SRCMD_ENH_OFFSET:
> + rz = s->regs.srcmd_enh[idx];
> + break;
> + default:
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + break;
> + }
> + } else if (addr >= s->entry_offset &&
> + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET +
> + 16 * (s->entry_num - 1)) {
> + offset = addr - s->entry_offset;
> + idx = offset >> 4;
> + offset &= 0xf;
> +
> + switch (offset) {
> + case ENTRY_ADDR_OFFSET:
> + rz = s->regs.entry[idx].addr_reg;
> + break;
> + case ENTRY_ADDRH_OFFSET:
> + rz = s->regs.entry[idx].addrh_reg;
> + break;
> + case ENTRY_CFG_OFFSET:
> + rz = s->regs.entry[idx].cfg_reg;
> + break;
> + case ENTRY_USER_CFG_OFFSET:
> + /* Does not support user customized permission */
> + rz = 0;
> + break;
> + default:
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + break;
> + }
> + } else {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + }
> + break;
> + }
> + trace_iopmp_read(addr, rz);
> + return rz;
> +}
> +
> +static void
> +iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size)
> +{
> + IopmpState *s = IOPMP(opaque);
> + uint32_t offset, idx;
> + uint32_t value32 = value;
> +
> + trace_iopmp_write(addr, value32);
> +
> + switch (addr) {
> + case A_VERSION: /* RO */
> + break;
> + case A_IMP: /* RO */
> + break;
> + case A_HWCFG0:
> + if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) {
> + /* W1C */
> + s->prient_prog = 0;
> + }
> + if (FIELD_EX32(value32, HWCFG0, ENABLE)) {
> + /* W1S */
> + s->enable = 1;
> + iopmp_iommu_notify(s);
> + }
> + break;
> + case A_HWCFG1: /* RO */
> + break;
> + case A_HWCFG2:
> + if (s->prient_prog) {
> + s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY);
> + }
> + break;
> + case A_ERR_CFG:
> + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) {
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L,
> + FIELD_EX32(value32, ERR_CFG, L));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE,
> + FIELD_EX32(value32, ERR_CFG, IE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IRE,
> + FIELD_EX32(value32, ERR_CFG, IRE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RRE,
> + FIELD_EX32(value32, ERR_CFG, RRE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IWE,
> + FIELD_EX32(value32, ERR_CFG, IWE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RWE,
> + FIELD_EX32(value32, ERR_CFG, RWE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IXE,
> + FIELD_EX32(value32, ERR_CFG, IXE));
> + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RXE,
> + FIELD_EX32(value32, ERR_CFG, RXE));
> + }
> + break;
> + case A_MDLCK:
> + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> + s->regs.mdlck = value32;
> + }
> + break;
> + case A_MDLCKH:
> + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> + s->regs.mdlckh = value32;
> + }
> + break;
> + case A_MDCFGLCK:
> + if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) {
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F,
> + FIELD_EX32(value32, MDCFGLCK, F));
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L,
> + FIELD_EX32(value32, MDCFGLCK, L));
> + }
> + break;
> + case A_ENTRYLCK:
> + if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) {
> + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F,
> + FIELD_EX32(value32, ENTRYLCK, F));
> + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L,
> + FIELD_EX32(value32, ENTRYLCK, L));
> + }
> + case A_ERR_REQADDR: /* RO */
> + break;
> + case A_ERR_REQADDRH: /* RO */
> + break;
> + case A_ERR_REQID: /* RO */
> + break;
> + case A_ERR_REQINFO:
> + if (FIELD_EX32(value32, ERR_REQINFO, V)) {
> + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo,
> + ERR_REQINFO, V, 0);
> + qemu_set_irq(s->irq, 0);
> + }
> + break;
> +
> + default:
> + if (addr >= A_MDCFG0 &&
> + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> + offset = addr - A_MDCFG0;
> + idx = offset >> 2;
> + /* RO in rapid-k model */
> + if (idx > 0) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + }
> + } else if (addr >= A_SRCMD_EN0 &&
> + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> + offset = addr - A_SRCMD_EN0;
> + idx = offset >> 5;
> + offset &= 0x1f;
> +
> + if (offset % 4) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L)
> + == 0) {
> + switch (offset) {
> + case SRCMD_EN_OFFSET:
> + s->regs.srcmd_en[idx] =
> + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L,
> + FIELD_EX32(value32, SRCMD_EN0, L));
> +
> + /* MD field is protected by mdlck */
> + value32 = (value32 & ~s->regs.mdlck) |
> + (s->regs.srcmd_en[idx] & s->regs.mdlck);
> + s->regs.srcmd_en[idx] =
> + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD,
> + FIELD_EX32(value32, SRCMD_EN0, MD));
> + break;
> + case SRCMD_ENH_OFFSET:
> + value32 = (value32 & ~s->regs.mdlckh) |
> + (s->regs.srcmd_enh[idx] & s->regs.mdlckh);
> + s->regs.srcmd_enh[idx] =
> + FIELD_DP32(s->regs.srcmd_enh[idx], SRCMD_ENH0, MDH,
> + value32);
> + break;
> + default:
> + break;
> + }
> + }
> + } else if (addr >= s->entry_offset &&
> + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET
> + + 16 * (s->entry_num - 1)) {
> + offset = addr - s->entry_offset;
> + idx = offset >> 4;
> + offset &= 0xf;
> +
> + /* index < ENTRYLCK_F is protected */
> + if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) {
> + switch (offset) {
> + case ENTRY_ADDR_OFFSET:
> + s->regs.entry[idx].addr_reg = value32;
> + break;
> + case ENTRY_ADDRH_OFFSET:
> + s->regs.entry[idx].addrh_reg = value32;
> + break;
> + case ENTRY_CFG_OFFSET:
> + s->regs.entry[idx].cfg_reg = value32;
> + break;
> + case ENTRY_USER_CFG_OFFSET:
> + /* Does not support user customized permission */
> + break;
> + default:
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> + __func__, (int)addr);
> + break;
> + }
> + iopmp_update_rule(s, idx);
> + if (idx + 1 < s->entry_num &&
> + FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) ==
> + IOPMP_AMATCH_TOR) {
> + iopmp_update_rule(s, idx + 1);
> + }
> + }
> + } else {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__,
> + (int)addr);
> + }
> + }
> +}
> +
> +/* Match entry in memory domain */
> +static int match_entry_md(IopmpState *s, int md_idx, hwaddr start_addr,
> + hwaddr end_addr, int *entry_idx,
> + int *prior_entry_in_tlb)
> +{
> + int entry_idx_s, entry_idx_e;
> + int result = ENTRY_NO_HIT;
> + int i = 0;
> + hwaddr tlb_sa = start_addr & ~(TARGET_PAGE_SIZE - 1);
> + hwaddr tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> +
> + entry_idx_s = md_idx * s->regs.mdcfg[0];
> + entry_idx_e = (md_idx + 1) * s->regs.mdcfg[0];
> +
> + if (entry_idx_s >= s->entry_num) {
> + return result;
> + }
> + if (entry_idx_e > s->entry_num) {
> + entry_idx_e = s->entry_num;
> + }
> + i = entry_idx_s;
> + for (i = entry_idx_s; i < entry_idx_e; i++) {
> + if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) ==
> + IOPMP_AMATCH_OFF) {
> + continue;
> + }
> + if (start_addr >= s->entry_addr[i].sa &&
> + start_addr <= s->entry_addr[i].ea) {
> + /* Check end address */
> + if (end_addr >= s->entry_addr[i].sa &&
> + end_addr <= s->entry_addr[i].ea) {
> + *entry_idx = i;
> + return ENTRY_HIT;
> + } else if (i >= s->prio_entry) {
> + /* Continue for non-prio_entry */
> + continue;
> + } else {
> + *entry_idx = i;
> + return ENTRY_PAR_HIT;
> + }
> + } else if (end_addr >= s->entry_addr[i].sa &&
> + end_addr <= s->entry_addr[i].ea) {
> + /* Only end address matches the entry */
> + if (i >= s->prio_entry) {
> + continue;
> + } else {
> + *entry_idx = i;
> + return ENTRY_PAR_HIT;
> + }
> + } else if (start_addr < s->entry_addr[i].sa &&
> + end_addr > s->entry_addr[i].ea) {
> + if (i >= s->prio_entry) {
> + continue;
> + } else {
> + *entry_idx = i;
> + return ENTRY_PAR_HIT;
> + }
> + }
> + if (prior_entry_in_tlb != NULL) {
> + if ((s->entry_addr[i].sa >= tlb_sa &&
> + s->entry_addr[i].sa <= tlb_ea) ||
> + (s->entry_addr[i].ea >= tlb_sa &&
> + s->entry_addr[i].ea <= tlb_ea)) {
> + /*
> + * TLB should not use the cached result when the tlb contains
> + * higher priority entry
> + */
> + *prior_entry_in_tlb = 1;
> + }
> + }
> + }
> + return result;
> +}
> +
> +static int match_entry(IopmpState *s, int rrid, hwaddr start_addr,
> + hwaddr end_addr, int *match_md_idx,
> + int *match_entry_idx, int *prior_entry_in_tlb)
> +{
> + int cur_result = ENTRY_NO_HIT;
> + int result = ENTRY_NO_HIT;
> + /* Remove lock bit */
> + uint64_t srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] |
> + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1;
> +
> + for (int md_idx = 0; md_idx < s->md_num; md_idx++) {
> + if (srcmd_en & (1ULL << md_idx)) {
> + cur_result = match_entry_md(s, md_idx, start_addr, end_addr,
> + match_entry_idx, prior_entry_in_tlb);
> + if (cur_result == ENTRY_HIT || cur_result == ENTRY_PAR_HIT) {
> + *match_md_idx = md_idx;
> + return cur_result;
> + }
> + }
> + }
> + return result;
> +}
> +
> +static void iopmp_error_reaction(IopmpState *s, uint32_t id, hwaddr start,
> + uint32_t info)
> +{
> + if (!FIELD_EX32(s->regs.err_reqinfo, ERR_REQINFO, V)) {
> + s->regs.err_reqinfo = info;
> + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo, ERR_REQINFO, V,
> + 1);
> + s->regs.err_reqid = id;
> + /* addr[LEN+2:2] */
> + s->regs.err_reqaddr = start >> 2;
> +
> + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_READ &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IRE)) {
> + qemu_set_irq(s->irq, 1);
> + }
> + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_WRITE &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IWE)) {
> + qemu_set_irq(s->irq, 1);
> + }
> + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_FETCH &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IXE)) {
> + qemu_set_irq(s->irq, 1);
> + }
> + }
> +}
> +
> +static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
> + IOMMUAccessFlags flags, int iommu_idx)
> +{
> + int rrid = iommu_idx;
> + IopmpState *s = IOPMP(container_of(iommu, IopmpState, iommu));
> + hwaddr start_addr, end_addr;
> + int entry_idx = -1;
> + int md_idx = -1;
> + int result;
> + uint32_t error_info = 0;
> + uint32_t error_id = 0;
> + int prior_entry_in_tlb = 0;
> + iopmp_permission iopmp_perm;
> + IOMMUTLBEntry entry = {
> + .target_as = &s->downstream_as,
> + .iova = addr,
> + .translated_addr = addr,
> + .addr_mask = 0,
> + .perm = IOMMU_NONE,
> + };
> +
> + if (!s->enable) {
> + /* Bypass IOPMP */
> + entry.addr_mask = -1ULL,
> + entry.perm = IOMMU_RW;
> + return entry;
> + }
> +
> + /* unknown RRID */
> + if (rrid >= s->rrid_num) {
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_RRID);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> + iopmp_error_reaction(s, error_id, addr, error_info);
> + entry.target_as = &s->blocked_rwx_as;
> + entry.perm = IOMMU_RW;
> + return entry;
> + }
> +
> + if (s->transaction_state[rrid].supported == true) {
> + start_addr = s->transaction_state[rrid].start_addr;
> + end_addr = s->transaction_state[rrid].end_addr;
> + } else {
> + /* No transaction information, use the same address */
> + start_addr = addr;
> + end_addr = addr;
> + }
> +
> + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> + &prior_entry_in_tlb);
> + if (result == ENTRY_HIT) {
> + entry.addr_mask = s->entry_addr[entry_idx].ea -
> + s->entry_addr[entry_idx].sa;
> + if (prior_entry_in_tlb) {
> + /* Make TLB repeat iommu translation on every access */
I don't follow this, if we have a prior entry in the TLB cache we
don't cache the accesses?
> + entry.addr_mask = 0;
> + }
> + iopmp_perm = s->regs.entry[entry_idx].cfg_reg & IOPMP_RWX;
> + if (flags) {
> + if ((iopmp_perm & flags) == 0) {
> + /* Permission denied */
> + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_READ + flags - 1);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + entry.target_as = &s->blocked_rwx_as;
> + entry.perm = IOMMU_RW;
> + } else {
> + entry.target_as = &s->downstream_as;
> + entry.perm = iopmp_perm;
> + }
> + } else {
> + /* CPU access with IOMMU_NONE flag */
> + if (iopmp_perm & IOPMP_XO) {
> + if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) {
> + entry.target_as = &s->downstream_as;
> + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> + entry.target_as = &s->blocked_w_as;
> + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> + entry.target_as = &s->blocked_r_as;
> + } else {
> + entry.target_as = &s->blocked_rw_as;
> + }
> + } else {
> + if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) {
> + entry.target_as = &s->blocked_x_as;
> + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> + entry.target_as = &s->blocked_wx_as;
> + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> + entry.target_as = &s->blocked_rx_as;
> + } else {
> + entry.target_as = &s->blocked_rwx_as;
> + }
> + }
> + entry.perm = IOMMU_RW;
> + }
> + } else {
> + if (flags) {
> + if (result == ENTRY_PAR_HIT) {
> + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_PARHIT);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + } else {
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_NOHIT);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + }
> + }
> + /* CPU access with IOMMU_NONE flag no_hit or par_hit */
> + entry.target_as = &s->blocked_rwx_as;
> + entry.perm = IOMMU_RW;
> + }
> + return entry;
> +}
> +
> +static const MemoryRegionOps iopmp_ops = {
> + .read = iopmp_read,
> + .write = iopmp_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 4, .max_access_size = 4}
> +};
> +
> +static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr,
> + uint64_t value, unsigned size,
> + MemTxAttrs attrs)
> +{
> + IopmpState *s = IOPMP(opaque);
> + return address_space_write(&s->downstream_as, addr, attrs, &value, size);
> +}
> +
> +static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr,
> + uint64_t *pdata, unsigned size,
> + MemTxAttrs attrs)
> +{
> + IopmpState *s = IOPMP(opaque);
> + return address_space_read(&s->downstream_as, addr, attrs, pdata, size);
> +}
> +
> +static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr,
> + uint64_t *data, unsigned size,
> + MemTxAttrs attrs,
> + iopmp_access_type access_type) {
> + IopmpState *s = IOPMP(opaque);
> + int md_idx, entry_idx;
> + uint32_t error_info = 0;
> + uint32_t error_id = 0;
> + int rrid = attrs.requester_id;
> + int result;
> + hwaddr start_addr, end_addr;
> + start_addr = addr;
> + end_addr = addr;
> + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> + NULL);
> +
> + if (result == ENTRY_HIT) {
> + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + access_type);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + } else if (result == ENTRY_PAR_HIT) {
> + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_PARHIT);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE,
> + access_type);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + } else {
> + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> + ERR_REQINFO_ETYPE_NOHIT);
> + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> + iopmp_error_reaction(s, error_id, start_addr, error_info);
> + }
> +
> + if (access_type == IOPMP_ACCESS_READ) {
> +
> + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RRE)) {
> + case RRE_ERROR:
> + return MEMTX_ERROR;
> + break;
> + case RRE_SUCCESS_VALUE:
> + *data = s->fabricated_v;
> + return MEMTX_OK;
> + break;
> + default:
> + break;
> + }
> + return MEMTX_OK;
> + } else if (access_type == IOPMP_ACCESS_WRITE) {
> +
> + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RWE)) {
> + case RWE_ERROR:
> + return MEMTX_ERROR;
> + break;
> + case RWE_SUCCESS:
> + return MEMTX_OK;
> + break;
> + default:
> + break;
> + }
> + return MEMTX_OK;
> + } else {
> +
> + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RXE)) {
> + case RXE_ERROR:
> + return MEMTX_ERROR;
> + break;
> + case RXE_SUCCESS_VALUE:
> + *data = s->fabricated_v;
> + return MEMTX_OK;
> + break;
> + default:
> + break;
> + }
> + return MEMTX_OK;
> + }
> + return MEMTX_OK;
> +}
> +
> +static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value,
> + unsigned size, MemTxAttrs attrs)
> +{
> + return iopmp_handle_block(opaque, addr, &value, size, attrs,
> + IOPMP_ACCESS_WRITE);
> +}
> +
> +static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata,
> + unsigned size, MemTxAttrs attrs)
> +{
> + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> + IOPMP_ACCESS_READ);
> +}
> +
> +static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata,
> + unsigned size, MemTxAttrs attrs)
> +{
> + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> + IOPMP_ACCESS_FETCH);
> +}
> +
> +static const MemoryRegionOps iopmp_block_rw_ops = {
> + .fetch_with_attrs = iopmp_permssion_read,
> + .read_with_attrs = iopmp_block_read,
> + .write_with_attrs = iopmp_block_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_w_ops = {
> + .fetch_with_attrs = iopmp_permssion_read,
> + .read_with_attrs = iopmp_permssion_read,
> + .write_with_attrs = iopmp_block_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_r_ops = {
> + .fetch_with_attrs = iopmp_permssion_read,
> + .read_with_attrs = iopmp_block_read,
> + .write_with_attrs = iopmp_permssion_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_rwx_ops = {
> + .fetch_with_attrs = iopmp_block_fetch,
> + .read_with_attrs = iopmp_block_read,
> + .write_with_attrs = iopmp_block_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_wx_ops = {
> + .fetch_with_attrs = iopmp_block_fetch,
> + .read_with_attrs = iopmp_permssion_read,
> + .write_with_attrs = iopmp_block_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_rx_ops = {
> + .fetch_with_attrs = iopmp_block_fetch,
> + .read_with_attrs = iopmp_block_read,
> + .write_with_attrs = iopmp_permssion_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static const MemoryRegionOps iopmp_block_x_ops = {
> + .fetch_with_attrs = iopmp_block_fetch,
> + .read_with_attrs = iopmp_permssion_read,
> + .write_with_attrs = iopmp_permssion_write,
> + .endianness = DEVICE_NATIVE_ENDIAN,
> + .valid = {.min_access_size = 1, .max_access_size = 8},
> +};
> +
> +static void iopmp_realize(DeviceState *dev, Error **errp)
> +{
> + Object *obj = OBJECT(dev);
> + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> + IopmpState *s = IOPMP(dev);
> + uint64_t size;
> +
> + size = -1ULL;
> + s->model = IOPMP_MODEL_RAPIDK;
Should this be a property to allow other models in the future?
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> +
> + s->prient_prog = s->default_prient_prog;
> + s->rrid_num = MIN(s->rrid_num, IOPMP_MAX_RRID_NUM);
> + s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM);
> + s->entry_num = s->md_num * s->k;
> + s->prio_entry = MIN(s->prio_entry, s->entry_num);
> +
> + s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t));
> + s->regs.mdcfg[0] = s->k;
> +
> + s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t));
> + s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t));
> + s->regs.entry = g_malloc0(s->entry_num * sizeof(iopmp_entry_t));
> + s->entry_addr = g_malloc0(s->entry_num * sizeof(iopmp_addr_t));
> + s->transaction_state = g_malloc0(s->rrid_num *
> + sizeof(iopmp_transaction_state));
> + qemu_mutex_init(&s->iopmp_transaction_mutex);
> +
> + memory_region_init_iommu(&s->iommu, sizeof(s->iommu),
> + TYPE_IOPMP_IOMMU_MEMORY_REGION,
> + obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX);
> + memory_region_init_io(&s->mmio, obj, &iopmp_ops,
> + s, "iopmp-regs", 0x100000);
> + sysbus_init_mmio(sbd, &s->mmio);
> +
> + memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops,
> + s, "iopmp-blocked-rw", size);
> + memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops,
> + s, "iopmp-blocked-w", size);
> + memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops,
> + s, "iopmp-blocked-r", size);
> +
> + memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops,
> + s, "iopmp-blocked-rwx", size);
> + memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops,
> + s, "iopmp-blocked-wx", size);
> + memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops,
> + s, "iopmp-blocked-rx", size);
> + memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops,
> + s, "iopmp-blocked-x", size);
> + address_space_init(&s->blocked_rw_as, &s->blocked_rw,
> + "iopmp-blocked-rw-as");
> + address_space_init(&s->blocked_w_as, &s->blocked_w,
> + "iopmp-blocked-w-as");
> + address_space_init(&s->blocked_r_as, &s->blocked_r,
> + "iopmp-blocked-r-as");
> +
> + address_space_init(&s->blocked_rwx_as, &s->blocked_rwx,
> + "iopmp-blocked-rwx-as");
> + address_space_init(&s->blocked_wx_as, &s->blocked_wx,
> + "iopmp-blocked-wx-as");
> + address_space_init(&s->blocked_rx_as, &s->blocked_rx,
> + "iopmp-blocked-rx-as");
> + address_space_init(&s->blocked_x_as, &s->blocked_x,
> + "iopmp-blocked-x-as");
> +}
> +
> +static void iopmp_reset(DeviceState *dev)
> +{
> + IopmpState *s = IOPMP(dev);
> +
> + qemu_set_irq(s->irq, 0);
> + memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t));
> + memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t));
> + memset(s->entry_addr, 0, s->entry_num * sizeof(iopmp_addr_t));
> +
> + s->regs.mdlck = 0;
> + s->regs.mdlckh = 0;
> + s->regs.entrylck = 0;
> + s->regs.mdstall = 0;
> + s->regs.mdstallh = 0;
> + s->regs.rridscp = 0;
> + s->regs.err_cfg = 0;
> + s->regs.err_reqaddr = 0;
> + s->regs.err_reqid = 0;
> + s->regs.err_reqinfo = 0;
> +
> + s->prient_prog = s->default_prient_prog;
> + s->enable = 0;
> +
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> + s->regs.mdcfg[0] = s->k;
> +}
> +
> +static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> +{
> + return attrs.requester_id;
> +}
> +
> +static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data)
> +{
> + IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> +
> + imrc->translate = iopmp_translate;
> + imrc->attrs_to_index = iopmp_attrs_to_index;
> +}
> +
> +static Property iopmp_property[] = {
> + DEFINE_PROP_BOOL("prient_prog", IopmpState, default_prient_prog, true),
> + DEFINE_PROP_UINT32("k", IopmpState, k, 6),
> + DEFINE_PROP_UINT32("prio_entry", IopmpState, prio_entry, 48),
> + DEFINE_PROP_UINT32("rrid_num", IopmpState, rrid_num, 16),
> + DEFINE_PROP_UINT32("md_num", IopmpState, md_num, 8),
> + DEFINE_PROP_UINT32("entry_offset", IopmpState, entry_offset, 0x4000),
> + DEFINE_PROP_UINT32("fabricated_v", IopmpState, fabricated_v, 0x0),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void iopmp_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + device_class_set_props(dc, iopmp_property);
> + dc->realize = iopmp_realize;
> + dc->reset = iopmp_reset;
> +}
> +
> +static void iopmp_init(Object *obj)
> +{
> + IopmpState *s = IOPMP(obj);
> + SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
> +
> + sysbus_init_irq(sbd, &s->irq);
> +}
> +
> +static const TypeInfo iopmp_info = {
> + .name = TYPE_IOPMP,
> + .parent = TYPE_SYS_BUS_DEVICE,
> + .instance_size = sizeof(IopmpState),
> + .instance_init = iopmp_init,
> + .class_init = iopmp_class_init,
> +};
> +
> +static const TypeInfo
> +iopmp_iommu_memory_region_info = {
> + .name = TYPE_IOPMP_IOMMU_MEMORY_REGION,
> + .parent = TYPE_IOMMU_MEMORY_REGION,
> + .class_init = iopmp_iommu_memory_region_class_init,
> +};
> +
> +static void
> +iopmp_register_types(void)
> +{
> + type_register_static(&iopmp_info);
> + type_register_static(&iopmp_iommu_memory_region_info);
> +}
> +
> +type_init(iopmp_register_types);
> diff --git a/hw/misc/trace-events b/hw/misc/trace-events
> index 1be0717c0c..c148166d2d 100644
> --- a/hw/misc/trace-events
> +++ b/hw/misc/trace-events
> @@ -362,3 +362,6 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
> aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
>
> +# riscv_iopmp.c
> +iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> +iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> new file mode 100644
> index 0000000000..b8fe479108
> --- /dev/null
> +++ b/include/hw/misc/riscv_iopmp.h
> @@ -0,0 +1,168 @@
> +/*
> + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> + *
> + * Copyright (c) 2023-2024 Andes Tech. Corp.
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef RISCV_IOPMP_H
> +#define RISCV_IOPMP_H
> +
> +#include "hw/sysbus.h"
> +#include "qemu/typedefs.h"
> +#include "memory.h"
> +#include "exec/hwaddr.h"
> +
> +#define TYPE_IOPMP "iopmp"
> +#define IOPMP(obj) OBJECT_CHECK(IopmpState, (obj), TYPE_IOPMP)
> +
> +#define IOPMP_MAX_MD_NUM 63
> +#define IOPMP_MAX_RRID_NUM 65535
> +#define IOPMP_MAX_ENTRY_NUM 65535
> +
> +#define VENDER_VIRT 0
> +#define SPECVER_0_9_1 91
> +#define IMPID_0_9_1 91
> +
> +#define RRE_ERROR 0
> +#define RRE_SUCCESS_VALUE 1
> +
> +#define RWE_ERROR 0
> +#define RWE_SUCCESS 1
> +
> +#define RXE_ERROR 0
> +#define RXE_SUCCESS_VALUE 1
> +
> +#define ERR_REQINFO_TTYPE_READ 1
> +#define ERR_REQINFO_TTYPE_WRITE 2
> +#define ERR_REQINFO_TTYPE_FETCH 3
> +#define ERR_REQINFO_ETYPE_NOERROR 0
> +#define ERR_REQINFO_ETYPE_READ 1
> +#define ERR_REQINFO_ETYPE_WRITE 2
> +#define ERR_REQINFO_ETYPE_FETCH 3
> +#define ERR_REQINFO_ETYPE_PARHIT 4
> +#define ERR_REQINFO_ETYPE_NOHIT 5
> +#define ERR_REQINFO_ETYPE_RRID 6
> +#define ERR_REQINFO_ETYPE_USER 7
> +
> +#define IOPMP_MODEL_FULL 0
> +#define IOPMP_MODEL_RAPIDK 0x1
> +#define IOPMP_MODEL_DYNAMICK 0x2
> +#define IOPMP_MODEL_ISOLATION 0x3
> +#define IOPMP_MODEL_COMPACTK 0x4
> +
> +#define ENTRY_NO_HIT 0
> +#define ENTRY_PAR_HIT 1
> +#define ENTRY_HIT 2
Why not an enum?
Alistair
> +
> +/* The generic iopmp address space which downstream is system memory */
> +extern AddressSpace iopmp_container_as;
> +
> +typedef enum {
> + IOPMP_AMATCH_OFF, /* Null (off) */
> + IOPMP_AMATCH_TOR, /* Top of Range */
> + IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */
> + IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */
> +} iopmp_am_t;
> +
> +typedef enum {
> + IOPMP_ACCESS_READ = 1,
> + IOPMP_ACCESS_WRITE = 2,
> + IOPMP_ACCESS_FETCH = 3
> +} iopmp_access_type;
> +
> +typedef enum {
> + IOPMP_NONE = 0,
> + IOPMP_RO = 1,
> + IOPMP_WO = 2,
> + IOPMP_RW = 3,
> + IOPMP_XO = 4,
> + IOPMP_RX = 5,
> + IOPMP_WX = 6,
> + IOPMP_RWX = 7,
> +} iopmp_permission;
> +
> +typedef struct {
> + uint32_t addr_reg;
> + uint32_t addrh_reg;
> + uint32_t cfg_reg;
> +} iopmp_entry_t;
> +
> +typedef struct {
> + uint64_t sa;
> + uint64_t ea;
> +} iopmp_addr_t;
> +
> +typedef struct {
> + uint32_t *srcmd_en;
> + uint32_t *srcmd_enh;
> + uint32_t *mdcfg;
> + iopmp_entry_t *entry;
> + uint32_t mdlck;
> + uint32_t mdlckh;
> + uint32_t entrylck;
> + uint32_t mdcfglck;
> + uint32_t mdstall;
> + uint32_t mdstallh;
> + uint32_t rridscp;
> + uint32_t err_cfg;
> + uint64_t err_reqaddr;
> + uint32_t err_reqid;
> + uint32_t err_reqinfo;
> +} iopmp_regs;
> +
> +
> +/* To detect partially hit */
> +typedef struct iopmp_transaction_state {
> + bool running;
> + bool supported;
> + hwaddr start_addr;
> + hwaddr end_addr;
> +} iopmp_transaction_state;
> +
> +typedef struct IopmpState {
> + SysBusDevice parent_obj;
> + iopmp_addr_t *entry_addr;
> + MemoryRegion mmio;
> + IOMMUMemoryRegion iommu;
> + IOMMUMemoryRegion *next_iommu;
> + iopmp_regs regs;
> + MemoryRegion *downstream;
> + MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx,
> + blocked_wx, blocked_rwx;
> + MemoryRegion stall_io;
> + uint32_t model;
> + uint32_t k;
> + bool prient_prog;
> + bool default_prient_prog;
> + iopmp_transaction_state *transaction_state;
> + QemuMutex iopmp_transaction_mutex;
> +
> + AddressSpace downstream_as;
> + AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as,
> + blocked_rx_as, blocked_wx_as, blocked_rwx_as;
> + qemu_irq irq;
> + bool enable;
> +
> + uint32_t prio_entry;
> + uint32_t rrid_num;
> + uint32_t md_num;
> + uint32_t entry_num;
> + uint32_t entry_offset;
> + uint32_t fabricated_v;
> +} IopmpState;
> +
> +#endif
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support
2024-07-15 10:14 ` [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support Ethan Chen via
@ 2024-08-08 4:01 ` Alistair Francis
2024-08-09 10:14 ` Ethan Chen via
0 siblings, 1 reply; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 4:01 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> - Add 'iopmp=on' option to enable IOPMP. It adds an iopmp device virt machine
> to protect all regions of system memory, and configures RRID of CPU.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
> ---
> docs/system/riscv/virt.rst | 5 +++
> hw/riscv/Kconfig | 1 +
> hw/riscv/virt.c | 63 ++++++++++++++++++++++++++++++++++++++
> include/hw/riscv/virt.h | 5 ++-
> 4 files changed, 73 insertions(+), 1 deletion(-)
>
> diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
> index 9a06f95a34..9fd006ccc2 100644
> --- a/docs/system/riscv/virt.rst
> +++ b/docs/system/riscv/virt.rst
> @@ -116,6 +116,11 @@ The following machine-specific options are supported:
> having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
> the default number of per-HART VS-level AIA IMSIC pages is 0.
>
> +- iopmp=[on|off]
> +
> + When this option is "on", an IOPMP device is added to machine. IOPMP checks
> + memory transcations in system memory. This option is assumed to be "off".
We probably should have a a little more here. You don't even mention
that this is the rapid-k model.
It might be worth adding a `model` field, to make it easier to add
other models in the future. Thoughts?
Alistair
> +
> Running Linux kernel
> --------------------
>
> diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
> index a2030e3a6f..0b45a5ade2 100644
> --- a/hw/riscv/Kconfig
> +++ b/hw/riscv/Kconfig
> @@ -56,6 +56,7 @@ config RISCV_VIRT
> select PLATFORM_BUS
> select ACPI
> select ACPI_PCI
> + select RISCV_IOPMP
>
> config SHAKTI_C
> bool
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index bc0893e087..5a03c03c4a 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -55,6 +55,7 @@
> #include "hw/acpi/aml-build.h"
> #include "qapi/qapi-visit-common.h"
> #include "hw/virtio/virtio-iommu.h"
> +#include "hw/misc/riscv_iopmp.h"
>
> /* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
> static bool virt_use_kvm_aia(RISCVVirtState *s)
> @@ -82,6 +83,7 @@ static const MemMapEntry virt_memmap[] = {
> [VIRT_UART0] = { 0x10000000, 0x100 },
> [VIRT_VIRTIO] = { 0x10001000, 0x1000 },
> [VIRT_FW_CFG] = { 0x10100000, 0x18 },
> + [VIRT_IOPMP] = { 0x10200000, 0x100000 },
> [VIRT_FLASH] = { 0x20000000, 0x4000000 },
> [VIRT_IMSIC_M] = { 0x24000000, VIRT_IMSIC_MAX_SIZE },
> [VIRT_IMSIC_S] = { 0x28000000, VIRT_IMSIC_MAX_SIZE },
> @@ -90,6 +92,11 @@ static const MemMapEntry virt_memmap[] = {
> [VIRT_DRAM] = { 0x80000000, 0x0 },
> };
>
> +static const MemMapEntry iopmp_protect_memmap[] = {
> + /* IOPMP protect all regions by default */
> + {0, 0xFFFFFFFF},
> +};
> +
> /* PCIe high mmio is fixed for RV32 */
> #define VIRT32_HIGH_PCIE_MMIO_BASE 0x300000000ULL
> #define VIRT32_HIGH_PCIE_MMIO_SIZE (4 * GiB)
> @@ -1024,6 +1031,24 @@ static void create_fdt_virtio_iommu(RISCVVirtState *s, uint16_t bdf)
> bdf + 1, iommu_phandle, bdf + 1, 0xffff - bdf);
> }
>
> +static void create_fdt_iopmp(RISCVVirtState *s, const MemMapEntry *memmap,
> + uint32_t irq_mmio_phandle) {
> + g_autofree char *name = NULL;
> + MachineState *ms = MACHINE(s);
> +
> + name = g_strdup_printf("/soc/iopmp@%lx", (long)memmap[VIRT_IOPMP].base);
> + qemu_fdt_add_subnode(ms->fdt, name);
> + qemu_fdt_setprop_string(ms->fdt, name, "compatible", "riscv_iopmp");
> + qemu_fdt_setprop_cells(ms->fdt, name, "reg", 0x0, memmap[VIRT_IOPMP].base,
> + 0x0, memmap[VIRT_IOPMP].size);
> + qemu_fdt_setprop_cell(ms->fdt, name, "interrupt-parent", irq_mmio_phandle);
> + if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> + qemu_fdt_setprop_cell(ms->fdt, name, "interrupts", IOPMP_IRQ);
> + } else {
> + qemu_fdt_setprop_cells(ms->fdt, name, "interrupts", IOPMP_IRQ, 0x4);
> + }
> +}
> +
> static void finalize_fdt(RISCVVirtState *s)
> {
> uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1;
> @@ -1042,6 +1067,10 @@ static void finalize_fdt(RISCVVirtState *s)
> create_fdt_uart(s, virt_memmap, irq_mmio_phandle);
>
> create_fdt_rtc(s, virt_memmap, irq_mmio_phandle);
> +
> + if (s->have_iopmp) {
> + create_fdt_iopmp(s, virt_memmap, irq_mmio_phandle);
> + }
> }
>
> static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap)
> @@ -1425,6 +1454,7 @@ static void virt_machine_init(MachineState *machine)
> DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
> int i, base_hartid, hart_count;
> int socket_count = riscv_socket_count(machine);
> + int cpu, socket;
>
> /* Check socket count limit */
> if (VIRT_SOCKETS_MAX < socket_count) {
> @@ -1606,6 +1636,19 @@ static void virt_machine_init(MachineState *machine)
> }
> virt_flash_map(s, system_memory);
>
> + if (s->have_iopmp) {
> + DeviceState *iopmp_dev = sysbus_create_simple(TYPE_IOPMP,
> + memmap[VIRT_IOPMP].base,
> + qdev_get_gpio_in(DEVICE(mmio_irqchip), IOPMP_IRQ));
> +
> + for (socket = 0; socket < socket_count; socket++) {
> + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> + iopmp_setup_cpu(&s->soc[socket].harts[cpu], 0);
> + }
> + }
> + iopmp_setup_system_memory(iopmp_dev, iopmp_protect_memmap, 1);
> + }
> +
> /* load/create device tree */
> if (machine->dtb) {
> machine->fdt = load_device_tree(machine->dtb, &s->fdt_size);
> @@ -1702,6 +1745,20 @@ static void virt_set_aclint(Object *obj, bool value, Error **errp)
> s->have_aclint = value;
> }
>
> +static bool virt_get_iopmp(Object *obj, Error **errp)
> +{
> + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> +
> + return s->have_iopmp;
> +}
> +
> +static void virt_set_iopmp(Object *obj, bool value, Error **errp)
> +{
> + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> +
> + s->have_iopmp = value;
> +}
> +
> bool virt_is_acpi_enabled(RISCVVirtState *s)
> {
> return s->acpi != ON_OFF_AUTO_OFF;
> @@ -1814,6 +1871,12 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
> NULL, NULL);
> object_class_property_set_description(oc, "acpi",
> "Enable ACPI");
> +
> + object_class_property_add_bool(oc, "iopmp", virt_get_iopmp,
> + virt_set_iopmp);
> + object_class_property_set_description(oc, "iopmp",
> + "Set on/off to enable/disable "
> + "iopmp device");
> }
>
> static const TypeInfo virt_machine_typeinfo = {
> diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
> index c0dc41ff9a..009b4ebea7 100644
> --- a/include/hw/riscv/virt.h
> +++ b/include/hw/riscv/virt.h
> @@ -55,6 +55,7 @@ struct RISCVVirtState {
>
> int fdt_size;
> bool have_aclint;
> + bool have_iopmp;
> RISCVVirtAIAType aia_type;
> int aia_guests;
> char *oem_id;
> @@ -84,12 +85,14 @@ enum {
> VIRT_PCIE_MMIO,
> VIRT_PCIE_PIO,
> VIRT_PLATFORM_BUS,
> - VIRT_PCIE_ECAM
> + VIRT_PCIE_ECAM,
> + VIRT_IOPMP,
> };
>
> enum {
> UART0_IRQ = 10,
> RTC_IRQ = 11,
> + IOPMP_IRQ = 12,
> VIRTIO_IRQ = 1, /* 1 to 8 */
> VIRTIO_COUNT = 8,
> PCIE_IRQ = 0x20, /* 32 to 35 */
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size
2024-07-15 9:56 ` [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size Ethan Chen via
@ 2024-08-08 4:12 ` Alistair Francis
0 siblings, 0 replies; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 4:12 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 7:59 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> If the IOMMU granularity is smaller than the TARGET_PAGE size, there may be
> multiple entries within the same page. To obtain the correct result, pass
> the original address to the IOMMU.
>
> Similar to the RISC-V PMP solution, the TLB_INVALID_MASK will be set when
> there are multiple entries in the same page, ensuring that the IOMMU is
> checked on every access.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> accel/tcg/cputlb.c | 20 ++++++++++++++++----
> system/physmem.c | 4 ++++
> 2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index edb3715017..7df106fea3 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1062,8 +1062,23 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
>
> prot = full->prot;
> asidx = cpu_asidx_from_attrs(cpu, full->attrs);
> - section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
> + section = address_space_translate_for_iotlb(cpu, asidx, full->phys_addr,
> &xlat, &sz, full->attrs, &prot);
> + /* Update page size */
> + full->lg_page_size = ctz64(sz);
> + if (full->lg_page_size > TARGET_PAGE_BITS) {
> + full->lg_page_size = TARGET_PAGE_BITS;
> + } else {
> + sz = TARGET_PAGE_SIZE;
> + }
> +
> + is_ram = memory_region_is_ram(section->mr);
> + is_romd = memory_region_is_romd(section->mr);
> + /* If the translated mr is ram/rom, make xlat align the TARGET_PAGE */
> + if (is_ram || is_romd) {
> + xlat &= TARGET_PAGE_MASK;
> + }
> +
> assert(sz >= TARGET_PAGE_SIZE);
>
> tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
> @@ -1076,9 +1091,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
> read_flags |= TLB_INVALID_MASK;
> }
>
> - is_ram = memory_region_is_ram(section->mr);
> - is_romd = memory_region_is_romd(section->mr);
> -
> if (is_ram || is_romd) {
> /* RAM and ROMD both have associated host memory. */
> addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
> diff --git a/system/physmem.c b/system/physmem.c
> index 2154432cb6..346b015447 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -702,6 +702,10 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
> iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
> addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
> | (addr & iotlb.addr_mask));
> + /* Update size */
> + if (iotlb.addr_mask != -1 && *plen > iotlb.addr_mask + 1) {
> + *plen = iotlb.addr_mask + 1;
> + }
> /* Update the caller's prot bits to remove permissions the IOMMU
> * is giving us a failure response for. If we get down to no
> * permissions left at all we can give up now.
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 3/8] target/riscv: Add support for IOPMP
2024-07-15 9:56 ` [PATCH v8 3/8] target/riscv: Add support for IOPMP Ethan Chen via
@ 2024-08-08 4:13 ` Alistair Francis
0 siblings, 0 replies; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 4:13 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 7:58 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu_cfg.h | 2 ++
> target/riscv/cpu_helper.c | 18 +++++++++++++++---
> 2 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index fb7eebde52..2946fec20c 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -164,6 +164,8 @@ struct RISCVCPUConfig {
> bool pmp;
> bool debug;
> bool misa_w;
> + bool iopmp;
> + uint32_t iopmp_rrid;
>
> bool short_isa_string;
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 6709622dd3..c2d6a874da 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -1418,9 +1418,21 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> }
>
> if (ret == TRANSLATE_SUCCESS) {
> - tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> - prot, mmu_idx, tlb_size);
> - return true;
> + if (cpu->cfg.iopmp) {
> + /*
> + * Do not align address on early stage because IOPMP needs origin
> + * address for permission check.
> + */
> + tlb_set_page_with_attrs(cs, address, pa,
> + (MemTxAttrs)
> + {
> + .requester_id = cpu->cfg.iopmp_rrid,
> + },
> + prot, mmu_idx, tlb_size);
> + } else {
> + tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> + prot, mmu_idx, tlb_size);
> + }
> } else if (probe) {
> return false;
> } else {
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory
2024-07-15 10:12 ` [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory Ethan Chen via
@ 2024-08-08 4:23 ` Alistair Francis
2024-08-09 10:11 ` Ethan Chen via
0 siblings, 1 reply; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 4:23 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 8:13 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> To enable system memory transactions through the IOPMP, memory regions must
> be moved to the IOPMP downstream and then replaced with IOMMUs for IOPMP
> translation.
>
> The iopmp_setup_system_memory() function copies subregions of system memory
> to create the IOPMP downstream and then replaces the specified memory
> regions in system memory with the IOMMU regions of the IOPMP. It also
> adds entries to a protection map that records the relationship between
> physical address regions and the IOPMP, which is used by the IOPMP DMA
> API to send transaction information.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
> ---
> hw/misc/riscv_iopmp.c | 61 +++++++++++++++++++++++++++++++++++
> include/hw/misc/riscv_iopmp.h | 3 ++
> 2 files changed, 64 insertions(+)
>
> diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> index db43e3c73f..e62ac57437 100644
> --- a/hw/misc/riscv_iopmp.c
> +++ b/hw/misc/riscv_iopmp.c
> @@ -1151,4 +1151,65 @@ iopmp_register_types(void)
> type_register_static(&iopmp_iommu_memory_region_info);
> }
>
> +/*
> + * Copies subregions from the source memory region to the destination memory
> + * region
> + */
> +static void copy_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr)
> +{
> + int32_t priority;
> + hwaddr addr;
> + MemoryRegion *alias, *subregion;
> + QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) {
> + priority = subregion->priority;
> + addr = subregion->addr;
> + alias = g_malloc0(sizeof(MemoryRegion));
> + memory_region_init_alias(alias, NULL, subregion->name, subregion, 0,
> + memory_region_size(subregion));
> + memory_region_add_subregion_overlap(dst_mr, addr, alias, priority);
> + }
> +}
This seems strange. Do we really need to do this?
I haven't looked at the memory_region stuff for awhile, but this seems
clunky and prone to breakage.
We already link s->iommu with the system memory
Alistair
> +
> +/*
> + * Create downstream of system memory for IOPMP, and overlap memory region
> + * specified in memmap with IOPMP translator. Make sure subregions are added to
> + * system memory before call this function. It also add entry to
> + * iopmp_protection_memmaps for recording the relationship between physical
> + * address regions and IOPMP.
> + */
> +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> + uint32_t map_entry_num)
> +{
> + IopmpState *s = IOPMP(dev);
> + uint32_t i;
> + MemoryRegion *iommu_alias;
> + MemoryRegion *target_mr = get_system_memory();
> + MemoryRegion *downstream = g_malloc0(sizeof(MemoryRegion));
> + memory_region_init(downstream, NULL, "iopmp_downstream",
> + memory_region_size(target_mr));
> + /* Copy subregions of target to downstream */
> + copy_memory_subregions(target_mr, downstream);
> +
> + iopmp_protection_memmap *map;
> + for (i = 0; i < map_entry_num; i++) {
> + /* Memory access to protected regions of target are through IOPMP */
> + iommu_alias = g_new(MemoryRegion, 1);
> + memory_region_init_alias(iommu_alias, NULL, "iommu_alias",
> + MEMORY_REGION(&s->iommu), memmap[i].base,
> + memmap[i].size);
> + memory_region_add_subregion_overlap(target_mr, memmap[i].base,
> + iommu_alias, 1);
> + /* Record which IOPMP is responsible for the region */
> + map = g_new0(iopmp_protection_memmap, 1);
> + map->iopmp_s = s;
> + map->entry.base = memmap[i].base;
> + map->entry.size = memmap[i].size;
> + QLIST_INSERT_HEAD(&iopmp_protection_memmaps, map, list);
> + }
> + s->downstream = downstream;
> + address_space_init(&s->downstream_as, s->downstream,
> + "iopmp-downstream-as");
> +}
> +
> +
> type_init(iopmp_register_types);
> diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> index b8fe479108..ebe9c4bc4a 100644
> --- a/include/hw/misc/riscv_iopmp.h
> +++ b/include/hw/misc/riscv_iopmp.h
> @@ -165,4 +165,7 @@ typedef struct IopmpState {
> uint32_t fabricated_v;
> } IopmpState;
>
> +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> + uint32_t mapentry_num);
> +
> #endif
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
2024-07-15 10:14 ` [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support Ethan Chen via
@ 2024-08-08 4:25 ` Alistair Francis
2024-08-09 9:56 ` Ethan Chen via
0 siblings, 1 reply; 27+ messages in thread
From: Alistair Francis @ 2024-08-08 4:25 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> The iopmp_setup_cpu() function configures the RISCV CPU to support IOPMP and
> specifies the CPU's RRID.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>
> ---
> hw/misc/riscv_iopmp.c | 6 ++++++
> include/hw/misc/riscv_iopmp.h | 1 +
> 2 files changed, 7 insertions(+)
>
> diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> index e62ac57437..374bf5c610 100644
> --- a/hw/misc/riscv_iopmp.c
> +++ b/hw/misc/riscv_iopmp.c
> @@ -1211,5 +1211,11 @@ void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> "iopmp-downstream-as");
> }
>
> +void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid)
> +{
> + cpu->cfg.iopmp = true;
> + cpu->cfg.iopmp_rrid = rrid;
> +}
This should just be a normal CPU property, which the machine can then
set to true if required
Alistair
> +
>
> type_init(iopmp_register_types);
> diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> index ebe9c4bc4a..7e7da56d10 100644
> --- a/include/hw/misc/riscv_iopmp.h
> +++ b/include/hw/misc/riscv_iopmp.h
> @@ -167,5 +167,6 @@ typedef struct IopmpState {
>
> void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> uint32_t mapentry_num);
> +void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid);
>
> #endif
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device
2024-08-08 3:56 ` Alistair Francis
@ 2024-08-09 9:42 ` Ethan Chen via
2024-08-12 0:42 ` Alistair Francis
2024-08-09 10:03 ` Ethan Chen via
1 sibling, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-08-09 9:42 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Thu, Aug 08, 2024 at 01:56:35PM +1000, Alistair Francis wrote:
> [EXTERNAL MAIL]
>
> On Mon, Jul 15, 2024 at 7:58 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> >
> > Support basic functions of IOPMP specification v0.9.1 rapid-k model.
> > The specification url:
> > https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
> >
> > The IOPMP checks whether memory access from a device or CPU is valid.
> > This implementation uses an IOMMU to modify the address space accessed
> > by the device.
> >
> > For device access with IOMMUAccessFlags specifying read or write
> > (IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in
> > iopmp_translate. If the access is valid, the target address space is
> > downstream_as. If the access is blocked, it will be redirected to
> > blocked_rwx_as.
> >
> > For CPU access with IOMMUAccessFlags not specifying read or write
> > (IOMMU_NONE), the IOPMP translates the access to the corresponding
> > address space based on the permission. If the access has full permission
> > (rwx), the target address space is downstream_as. If the access has
> > limited permission, the target address space is blocked_ followed by
> > the lacked permissions.
> >
> > The operation of a blocked region can trigger an IOPMP interrupt, a bus
> > error, or it can respond with success and fabricated data, depending on
> > the value of the IOPMP ERR_CFG register.
> >
> > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > ---
> > hw/misc/Kconfig | 3 +
> > hw/misc/meson.build | 1 +
> > hw/misc/riscv_iopmp.c | 1154 +++++++++++++++++++++++++++++++++
> > hw/misc/trace-events | 3 +
> > include/hw/misc/riscv_iopmp.h | 168 +++++
> > 5 files changed, 1329 insertions(+)
> > create mode 100644 hw/misc/riscv_iopmp.c
> > create mode 100644 include/hw/misc/riscv_iopmp.h
> >
> > diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
> > index 1e08785b83..427f0c702e 100644
> > --- a/hw/misc/Kconfig
> > +++ b/hw/misc/Kconfig
> > @@ -213,4 +213,7 @@ config IOSB
> > config XLNX_VERSAL_TRNG
> > bool
> >
> > +config RISCV_IOPMP
> > + bool
> > +
> > source macio/Kconfig
> > diff --git a/hw/misc/meson.build b/hw/misc/meson.build
> > index 2ca8717be2..d9006e1d81 100644
> > --- a/hw/misc/meson.build
> > +++ b/hw/misc/meson.build
> > @@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
> > +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c'))
> >
> > subdir('macio')
> >
> > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > new file mode 100644
> > index 0000000000..db43e3c73f
> > --- /dev/null
> > +++ b/hw/misc/riscv_iopmp.c
> > @@ -0,0 +1,1154 @@
> > +/*
> > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > + *
> > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2 or later, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > + * more details.
> > + *
> > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qemu/log.h"
> > +#include "qapi/error.h"
> > +#include "trace.h"
> > +#include "exec/exec-all.h"
> > +#include "exec/address-spaces.h"
> > +#include "hw/qdev-properties.h"
> > +#include "hw/sysbus.h"
> > +#include "hw/misc/riscv_iopmp.h"
> > +#include "memory.h"
> > +#include "hw/irq.h"
> > +#include "hw/registerfields.h"
> > +#include "trace.h"
> > +
> > +#define TYPE_IOPMP_IOMMU_MEMORY_REGION "iopmp-iommu-memory-region"
> > +
> > +REG32(VERSION, 0x00)
> > + FIELD(VERSION, VENDOR, 0, 24)
> > + FIELD(VERSION, SPECVER , 24, 8)
> > +REG32(IMP, 0x04)
> > + FIELD(IMP, IMPID, 0, 32)
> > +REG32(HWCFG0, 0x08)
> > + FIELD(HWCFG0, MODEL, 0, 4)
> > + FIELD(HWCFG0, TOR_EN, 4, 1)
> > + FIELD(HWCFG0, SPS_EN, 5, 1)
> > + FIELD(HWCFG0, USER_CFG_EN, 6, 1)
> > + FIELD(HWCFG0, PRIENT_PROG, 7, 1)
> > + FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1)
> > + FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1)
> > + FIELD(HWCFG0, CHK_X, 10, 1)
> > + FIELD(HWCFG0, NO_X, 11, 1)
> > + FIELD(HWCFG0, NO_W, 12, 1)
> > + FIELD(HWCFG0, STALL_EN, 13, 1)
> > + FIELD(HWCFG0, PEIS, 14, 1)
> > + FIELD(HWCFG0, PEES, 15, 1)
> > + FIELD(HWCFG0, MFR_EN, 16, 1)
> > + FIELD(HWCFG0, MD_NUM, 24, 7)
> > + FIELD(HWCFG0, ENABLE, 31, 1)
> > +REG32(HWCFG1, 0x0C)
> > + FIELD(HWCFG1, RRID_NUM, 0, 16)
> > + FIELD(HWCFG1, ENTRY_NUM, 16, 16)
> > +REG32(HWCFG2, 0x10)
> > + FIELD(HWCFG2, PRIO_ENTRY, 0, 16)
> > + FIELD(HWCFG2, RRID_TRANSL, 16, 16)
> > +REG32(ENTRYOFFSET, 0x14)
> > + FIELD(ENTRYOFFSET, OFFSET, 0, 32)
> > +REG32(MDSTALL, 0x30)
> > + FIELD(MDSTALL, EXEMPT, 0, 1)
> > + FIELD(MDSTALL, MD, 1, 31)
> > +REG32(MDSTALLH, 0x34)
> > + FIELD(MDSTALLH, MD, 0, 32)
> > +REG32(RRIDSCP, 0x38)
> > + FIELD(RRIDSCP, RRID, 0, 16)
> > + FIELD(RRIDSCP, OP, 30, 2)
> > +REG32(MDLCK, 0x40)
> > + FIELD(MDLCK, L, 0, 1)
> > + FIELD(MDLCK, MD, 1, 31)
> > +REG32(MDLCKH, 0x44)
> > + FIELD(MDLCKH, MDH, 0, 32)
> > +REG32(MDCFGLCK, 0x48)
> > + FIELD(MDCFGLCK, L, 0, 1)
> > + FIELD(MDCFGLCK, F, 1, 7)
> > +REG32(ENTRYLCK, 0x4C)
> > + FIELD(ENTRYLCK, L, 0, 1)
> > + FIELD(ENTRYLCK, F, 1, 16)
> > +REG32(ERR_CFG, 0x60)
> > + FIELD(ERR_CFG, L, 0, 1)
> > + FIELD(ERR_CFG, IE, 1, 1)
> > + FIELD(ERR_CFG, IRE, 2, 1)
> > + FIELD(ERR_CFG, IWE, 3, 1)
> > + FIELD(ERR_CFG, IXE, 4, 1)
> > + FIELD(ERR_CFG, RRE, 5, 1)
> > + FIELD(ERR_CFG, RWE, 6, 1)
> > + FIELD(ERR_CFG, RXE, 7, 1)
> > +REG32(ERR_REQINFO, 0x64)
> > + FIELD(ERR_REQINFO, V, 0, 1)
> > + FIELD(ERR_REQINFO, TTYPE, 1, 2)
> > + FIELD(ERR_REQINFO, ETYPE, 4, 3)
> > + FIELD(ERR_REQINFO, SVC, 7, 1)
> > +REG32(ERR_REQADDR, 0x68)
> > + FIELD(ERR_REQADDR, ADDR, 0, 32)
> > +REG32(ERR_REQADDRH, 0x6C)
> > + FIELD(ERR_REQADDRH, ADDRH, 0, 32)
> > +REG32(ERR_REQID, 0x70)
> > + FIELD(ERR_REQID, RRID, 0, 16)
> > + FIELD(ERR_REQID, EID, 16, 16)
> > +REG32(ERR_MFR, 0x74)
> > + FIELD(ERR_MFR, SVW, 0, 16)
> > + FIELD(ERR_MFR, SVI, 16, 12)
> > + FIELD(ERR_MFR, SVS, 31, 1)
> > +REG32(MDCFG0, 0x800)
> > + FIELD(MDCFG0, T, 0, 16)
> > +REG32(SRCMD_EN0, 0x1000)
> > + FIELD(SRCMD_EN0, L, 0, 1)
> > + FIELD(SRCMD_EN0, MD, 1, 31)
> > +REG32(SRCMD_ENH0, 0x1004)
> > + FIELD(SRCMD_ENH0, MDH, 0, 32)
> > +REG32(SRCMD_R0, 0x1008)
> > + FIELD(SRCMD_R0, MD, 1, 31)
> > +REG32(SRCMD_RH0, 0x100C)
> > + FIELD(SRCMD_RH0, MDH, 0, 32)
> > +REG32(SRCMD_W0, 0x1010)
> > + FIELD(SRCMD_W0, MD, 1, 31)
> > +REG32(SRCMD_WH0, 0x1014)
> > + FIELD(SRCMD_WH0, MDH, 0, 32)
> > +
> > +FIELD(ENTRY_ADDR, ADDR, 0, 32)
> > +FIELD(ENTRY_ADDRH, ADDRH, 0, 32)
> > +
> > +FIELD(ENTRY_CFG, R, 0, 1)
> > +FIELD(ENTRY_CFG, W, 1, 1)
> > +FIELD(ENTRY_CFG, X, 2, 1)
> > +FIELD(ENTRY_CFG, A, 3, 2)
> > +FIELD(ENTRY_CFG, SIRE, 5, 1)
> > +FIELD(ENTRY_CFG, SIWE, 6, 1)
> > +FIELD(ENTRY_CFG, SIXE, 7, 1)
> > +FIELD(ENTRY_CFG, SERE, 8, 1)
> > +FIELD(ENTRY_CFG, SEWE, 9, 1)
> > +FIELD(ENTRY_CFG, SEXE, 10, 1)
> > +
> > +FIELD(ENTRY_USER_CFG, IM, 0, 32)
> > +
> > +/* Offsets to SRCMD_EN(i) */
> > +#define SRCMD_EN_OFFSET 0x0
> > +#define SRCMD_ENH_OFFSET 0x4
> > +#define SRCMD_R_OFFSET 0x8
> > +#define SRCMD_RH_OFFSET 0xC
> > +#define SRCMD_W_OFFSET 0x10
> > +#define SRCMD_WH_OFFSET 0x14
> > +
> > +/* Offsets to ENTRY_ADDR(i) */
> > +#define ENTRY_ADDR_OFFSET 0x0
> > +#define ENTRY_ADDRH_OFFSET 0x4
> > +#define ENTRY_CFG_OFFSET 0x8
> > +#define ENTRY_USER_CFG_OFFSET 0xC
> > +
> > +/* Memmap for parallel IOPMPs */
> > +typedef struct iopmp_protection_memmap {
> > + MemMapEntry entry;
> > + IopmpState *iopmp_s;
> > + QLIST_ENTRY(iopmp_protection_memmap) list;
> > +} iopmp_protection_memmap;
> > +QLIST_HEAD(, iopmp_protection_memmap)
> > + iopmp_protection_memmaps = QLIST_HEAD_INITIALIZER(iopmp_protection_memmaps);
> > +
> > +static void iopmp_iommu_notify(IopmpState *s)
> > +{
> > + IOMMUTLBEvent event = {
> > + .entry = {
> > + .iova = 0,
> > + .translated_addr = 0,
> > + .addr_mask = -1ULL,
> > + .perm = IOMMU_NONE,
> > + },
> > + .type = IOMMU_NOTIFIER_UNMAP,
> > + };
> > +
> > + for (int i = 0; i < s->rrid_num; i++) {
> > + memory_region_notify_iommu(&s->iommu, i, event);
> > + }
> > +}
> > +
> > +static void iopmp_decode_napot(uint64_t a, uint64_t *sa,
> > + uint64_t *ea)
> > +{
> > + /*
> > + * aaaa...aaa0 8-byte NAPOT range
> > + * aaaa...aa01 16-byte NAPOT range
> > + * aaaa...a011 32-byte NAPOT range
> > + * ...
> > + * aa01...1111 2^XLEN-byte NAPOT range
> > + * a011...1111 2^(XLEN+1)-byte NAPOT range
> > + * 0111...1111 2^(XLEN+2)-byte NAPOT range
> > + * 1111...1111 Reserved
> > + */
> > +
> > + a = (a << 2) | 0x3;
> > + *sa = a & (a + 1);
> > + *ea = a | (a + 1);
> > +}
> > +
> > +static void iopmp_update_rule(IopmpState *s, uint32_t entry_index)
> > +{
> > + uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg;
> > + uint64_t this_addr = s->regs.entry[entry_index].addr_reg |
> > + ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32);
> > + uint64_t prev_addr = 0u;
> > + uint64_t sa = 0u;
> > + uint64_t ea = 0u;
> > +
> > + if (entry_index >= 1u) {
> > + prev_addr = s->regs.entry[entry_index - 1].addr_reg |
> > + ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32);
> > + }
> > +
> > + switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) {
> > + case IOPMP_AMATCH_OFF:
> > + sa = 0u;
> > + ea = -1;
> > + break;
> > +
> > + case IOPMP_AMATCH_TOR:
> > + sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */
> > + ea = ((this_addr) << 2) - 1u;
> > + if (sa > ea) {
> > + sa = ea = 0u;
> > + }
> > + break;
> > +
> > + case IOPMP_AMATCH_NA4:
> > + sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
> > + ea = (sa + 4u) - 1u;
> > + break;
> > +
> > + case IOPMP_AMATCH_NAPOT:
> > + iopmp_decode_napot(this_addr, &sa, &ea);
> > + break;
> > +
> > + default:
> > + sa = 0u;
> > + ea = 0u;
> > + break;
> > + }
> > +
> > + s->entry_addr[entry_index].sa = sa;
> > + s->entry_addr[entry_index].ea = ea;
> > + iopmp_iommu_notify(s);
> > +}
> > +
> > +static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + uint32_t rz = 0;
> > + uint32_t offset, idx;
> > +
> > + switch (addr) {
> > + case A_VERSION:
> > + rz = VENDER_VIRT << R_VERSION_VENDOR_SHIFT |
> > + SPECVER_0_9_1 << R_VERSION_SPECVER_SHIFT;
>
> It would be better to use the FIELD_DP32() macro instead of the manual shifts
It will be refined in next revision.
>
> > + break;
> > + case A_IMP:
> > + rz = IMPID_0_9_1;
> > + break;
> > + case A_HWCFG0:
> > + rz = s->model << R_HWCFG0_MODEL_SHIFT |
> > + 1 << R_HWCFG0_TOR_EN_SHIFT |
> > + 0 << R_HWCFG0_SPS_EN_SHIFT |
> > + 0 << R_HWCFG0_USER_CFG_EN_SHIFT |
> > + s->prient_prog << R_HWCFG0_PRIENT_PROG_SHIFT |
> > + 0 << R_HWCFG0_RRID_TRANSL_EN_SHIFT |
> > + 0 << R_HWCFG0_RRID_TRANSL_PROG_SHIFT |
> > + 1 << R_HWCFG0_CHK_X_SHIFT |
> > + 0 << R_HWCFG0_NO_X_SHIFT |
> > + 0 << R_HWCFG0_NO_W_SHIFT |
> > + 0 << R_HWCFG0_STALL_EN_SHIFT |
> > + 0 << R_HWCFG0_PEIS_SHIFT |
> > + 0 << R_HWCFG0_PEES_SHIFT |
> > + 0 << R_HWCFG0_MFR_EN_SHIFT |
> > + s->md_num << R_HWCFG0_MD_NUM_SHIFT |
> > + s->enable << R_HWCFG0_ENABLE_SHIFT ;
> > + break;
> > + case A_HWCFG1:
> > + rz = s->rrid_num << R_HWCFG1_RRID_NUM_SHIFT |
> > + s->entry_num << R_HWCFG1_ENTRY_NUM_SHIFT;
> > + break;
> > + case A_HWCFG2:
> > + rz = s->prio_entry << R_HWCFG2_PRIO_ENTRY_SHIFT;
> > + break;
> > + case A_ENTRYOFFSET:
> > + rz = s->entry_offset;
> > + break;
> > + case A_ERR_CFG:
> > + rz = s->regs.err_cfg;
> > + break;
> > + case A_MDLCK:
> > + rz = s->regs.mdlck;
> > + break;
> > + case A_MDLCKH:
> > + rz = s->regs.mdlckh;
> > + break;
> > + case A_MDCFGLCK:
> > + rz = s->regs.mdcfglck;
> > + break;
> > + case A_ENTRYLCK:
> > + rz = s->regs.entrylck;
> > + break;
> > + case A_ERR_REQADDR:
> > + rz = s->regs.err_reqaddr & UINT32_MAX;
> > + break;
> > + case A_ERR_REQADDRH:
> > + rz = s->regs.err_reqaddr >> 32;
> > + break;
> > + case A_ERR_REQID:
> > + rz = s->regs.err_reqid;
> > + break;
> > + case A_ERR_REQINFO:
> > + rz = s->regs.err_reqinfo;
> > + break;
> > +
> > + default:
> > + if (addr >= A_MDCFG0 &&
> > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > + offset = addr - A_MDCFG0;
> > + idx = offset >> 2;
> > + if (idx == 0 && offset == 0) {
> > + rz = s->regs.mdcfg[idx];
> > + } else {
> > + /* Only MDCFG0 is implemented in rapid-k model */
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + } else if (addr >= A_SRCMD_EN0 &&
> > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > + offset = addr - A_SRCMD_EN0;
> > + idx = offset >> 5;
> > + offset &= 0x1f;
> > +
> > + switch (offset) {
> > + case SRCMD_EN_OFFSET:
> > + rz = s->regs.srcmd_en[idx];
> > + break;
> > + case SRCMD_ENH_OFFSET:
> > + rz = s->regs.srcmd_enh[idx];
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + } else if (addr >= s->entry_offset &&
> > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET +
> > + 16 * (s->entry_num - 1)) {
> > + offset = addr - s->entry_offset;
> > + idx = offset >> 4;
> > + offset &= 0xf;
> > +
> > + switch (offset) {
> > + case ENTRY_ADDR_OFFSET:
> > + rz = s->regs.entry[idx].addr_reg;
> > + break;
> > + case ENTRY_ADDRH_OFFSET:
> > + rz = s->regs.entry[idx].addrh_reg;
> > + break;
> > + case ENTRY_CFG_OFFSET:
> > + rz = s->regs.entry[idx].cfg_reg;
> > + break;
> > + case ENTRY_USER_CFG_OFFSET:
> > + /* Does not support user customized permission */
> > + rz = 0;
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + } else {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + break;
> > + }
> > + trace_iopmp_read(addr, rz);
> > + return rz;
> > +}
> > +
> > +static void
> > +iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + uint32_t offset, idx;
> > + uint32_t value32 = value;
> > +
> > + trace_iopmp_write(addr, value32);
> > +
> > + switch (addr) {
> > + case A_VERSION: /* RO */
> > + break;
> > + case A_IMP: /* RO */
> > + break;
> > + case A_HWCFG0:
> > + if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) {
> > + /* W1C */
> > + s->prient_prog = 0;
> > + }
> > + if (FIELD_EX32(value32, HWCFG0, ENABLE)) {
> > + /* W1S */
> > + s->enable = 1;
> > + iopmp_iommu_notify(s);
> > + }
> > + break;
> > + case A_HWCFG1: /* RO */
> > + break;
> > + case A_HWCFG2:
> > + if (s->prient_prog) {
> > + s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY);
> > + }
> > + break;
> > + case A_ERR_CFG:
> > + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) {
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L,
> > + FIELD_EX32(value32, ERR_CFG, L));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE,
> > + FIELD_EX32(value32, ERR_CFG, IE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IRE,
> > + FIELD_EX32(value32, ERR_CFG, IRE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RRE,
> > + FIELD_EX32(value32, ERR_CFG, RRE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IWE,
> > + FIELD_EX32(value32, ERR_CFG, IWE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RWE,
> > + FIELD_EX32(value32, ERR_CFG, RWE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IXE,
> > + FIELD_EX32(value32, ERR_CFG, IXE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RXE,
> > + FIELD_EX32(value32, ERR_CFG, RXE));
> > + }
> > + break;
> > + case A_MDLCK:
> > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > + s->regs.mdlck = value32;
> > + }
> > + break;
> > + case A_MDLCKH:
> > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > + s->regs.mdlckh = value32;
> > + }
> > + break;
> > + case A_MDCFGLCK:
> > + if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) {
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F,
> > + FIELD_EX32(value32, MDCFGLCK, F));
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L,
> > + FIELD_EX32(value32, MDCFGLCK, L));
> > + }
> > + break;
> > + case A_ENTRYLCK:
> > + if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) {
> > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F,
> > + FIELD_EX32(value32, ENTRYLCK, F));
> > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L,
> > + FIELD_EX32(value32, ENTRYLCK, L));
> > + }
> > + case A_ERR_REQADDR: /* RO */
> > + break;
> > + case A_ERR_REQADDRH: /* RO */
> > + break;
> > + case A_ERR_REQID: /* RO */
> > + break;
> > + case A_ERR_REQINFO:
> > + if (FIELD_EX32(value32, ERR_REQINFO, V)) {
> > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo,
> > + ERR_REQINFO, V, 0);
> > + qemu_set_irq(s->irq, 0);
> > + }
> > + break;
> > +
> > + default:
> > + if (addr >= A_MDCFG0 &&
> > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > + offset = addr - A_MDCFG0;
> > + idx = offset >> 2;
> > + /* RO in rapid-k model */
> > + if (idx > 0) {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + } else if (addr >= A_SRCMD_EN0 &&
> > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > + offset = addr - A_SRCMD_EN0;
> > + idx = offset >> 5;
> > + offset &= 0x1f;
> > +
> > + if (offset % 4) {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L)
> > + == 0) {
> > + switch (offset) {
> > + case SRCMD_EN_OFFSET:
> > + s->regs.srcmd_en[idx] =
> > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L,
> > + FIELD_EX32(value32, SRCMD_EN0, L));
> > +
> > + /* MD field is protected by mdlck */
> > + value32 = (value32 & ~s->regs.mdlck) |
> > + (s->regs.srcmd_en[idx] & s->regs.mdlck);
> > + s->regs.srcmd_en[idx] =
> > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD,
> > + FIELD_EX32(value32, SRCMD_EN0, MD));
> > + break;
> > + case SRCMD_ENH_OFFSET:
> > + value32 = (value32 & ~s->regs.mdlckh) |
> > + (s->regs.srcmd_enh[idx] & s->regs.mdlckh);
> > + s->regs.srcmd_enh[idx] =
> > + FIELD_DP32(s->regs.srcmd_enh[idx], SRCMD_ENH0, MDH,
> > + value32);
> > + break;
> > + default:
> > + break;
> > + }
> > + }
> > + } else if (addr >= s->entry_offset &&
> > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET
> > + + 16 * (s->entry_num - 1)) {
> > + offset = addr - s->entry_offset;
> > + idx = offset >> 4;
> > + offset &= 0xf;
> > +
> > + /* index < ENTRYLCK_F is protected */
> > + if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) {
> > + switch (offset) {
> > + case ENTRY_ADDR_OFFSET:
> > + s->regs.entry[idx].addr_reg = value32;
> > + break;
> > + case ENTRY_ADDRH_OFFSET:
> > + s->regs.entry[idx].addrh_reg = value32;
> > + break;
> > + case ENTRY_CFG_OFFSET:
> > + s->regs.entry[idx].cfg_reg = value32;
> > + break;
> > + case ENTRY_USER_CFG_OFFSET:
> > + /* Does not support user customized permission */
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + iopmp_update_rule(s, idx);
> > + if (idx + 1 < s->entry_num &&
> > + FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) ==
> > + IOPMP_AMATCH_TOR) {
> > + iopmp_update_rule(s, idx + 1);
> > + }
> > + }
> > + } else {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__,
> > + (int)addr);
> > + }
> > + }
> > +}
> > +
> > +/* Match entry in memory domain */
> > +static int match_entry_md(IopmpState *s, int md_idx, hwaddr start_addr,
> > + hwaddr end_addr, int *entry_idx,
> > + int *prior_entry_in_tlb)
> > +{
> > + int entry_idx_s, entry_idx_e;
> > + int result = ENTRY_NO_HIT;
> > + int i = 0;
> > + hwaddr tlb_sa = start_addr & ~(TARGET_PAGE_SIZE - 1);
> > + hwaddr tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> > +
> > + entry_idx_s = md_idx * s->regs.mdcfg[0];
> > + entry_idx_e = (md_idx + 1) * s->regs.mdcfg[0];
> > +
> > + if (entry_idx_s >= s->entry_num) {
> > + return result;
> > + }
> > + if (entry_idx_e > s->entry_num) {
> > + entry_idx_e = s->entry_num;
> > + }
> > + i = entry_idx_s;
> > + for (i = entry_idx_s; i < entry_idx_e; i++) {
> > + if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) ==
> > + IOPMP_AMATCH_OFF) {
> > + continue;
> > + }
> > + if (start_addr >= s->entry_addr[i].sa &&
> > + start_addr <= s->entry_addr[i].ea) {
> > + /* Check end address */
> > + if (end_addr >= s->entry_addr[i].sa &&
> > + end_addr <= s->entry_addr[i].ea) {
> > + *entry_idx = i;
> > + return ENTRY_HIT;
> > + } else if (i >= s->prio_entry) {
> > + /* Continue for non-prio_entry */
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + } else if (end_addr >= s->entry_addr[i].sa &&
> > + end_addr <= s->entry_addr[i].ea) {
> > + /* Only end address matches the entry */
> > + if (i >= s->prio_entry) {
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + } else if (start_addr < s->entry_addr[i].sa &&
> > + end_addr > s->entry_addr[i].ea) {
> > + if (i >= s->prio_entry) {
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + }
> > + if (prior_entry_in_tlb != NULL) {
> > + if ((s->entry_addr[i].sa >= tlb_sa &&
> > + s->entry_addr[i].sa <= tlb_ea) ||
> > + (s->entry_addr[i].ea >= tlb_sa &&
> > + s->entry_addr[i].ea <= tlb_ea)) {
> > + /*
> > + * TLB should not use the cached result when the tlb contains
> > + * higher priority entry
> > + */
> > + *prior_entry_in_tlb = 1;
> > + }
> > + }
> > + }
> > + return result;
> > +}
> > +
> > +static int match_entry(IopmpState *s, int rrid, hwaddr start_addr,
> > + hwaddr end_addr, int *match_md_idx,
> > + int *match_entry_idx, int *prior_entry_in_tlb)
> > +{
> > + int cur_result = ENTRY_NO_HIT;
> > + int result = ENTRY_NO_HIT;
> > + /* Remove lock bit */
> > + uint64_t srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] |
> > + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1;
> > +
> > + for (int md_idx = 0; md_idx < s->md_num; md_idx++) {
> > + if (srcmd_en & (1ULL << md_idx)) {
> > + cur_result = match_entry_md(s, md_idx, start_addr, end_addr,
> > + match_entry_idx, prior_entry_in_tlb);
> > + if (cur_result == ENTRY_HIT || cur_result == ENTRY_PAR_HIT) {
> > + *match_md_idx = md_idx;
> > + return cur_result;
> > + }
> > + }
> > + }
> > + return result;
> > +}
> > +
> > +static void iopmp_error_reaction(IopmpState *s, uint32_t id, hwaddr start,
> > + uint32_t info)
> > +{
> > + if (!FIELD_EX32(s->regs.err_reqinfo, ERR_REQINFO, V)) {
> > + s->regs.err_reqinfo = info;
> > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo, ERR_REQINFO, V,
> > + 1);
> > + s->regs.err_reqid = id;
> > + /* addr[LEN+2:2] */
> > + s->regs.err_reqaddr = start >> 2;
> > +
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_READ &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IRE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_WRITE &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IWE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_FETCH &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IXE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + }
> > +}
> > +
> > +static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
> > + IOMMUAccessFlags flags, int iommu_idx)
> > +{
> > + int rrid = iommu_idx;
> > + IopmpState *s = IOPMP(container_of(iommu, IopmpState, iommu));
> > + hwaddr start_addr, end_addr;
> > + int entry_idx = -1;
> > + int md_idx = -1;
> > + int result;
> > + uint32_t error_info = 0;
> > + uint32_t error_id = 0;
> > + int prior_entry_in_tlb = 0;
> > + iopmp_permission iopmp_perm;
> > + IOMMUTLBEntry entry = {
> > + .target_as = &s->downstream_as,
> > + .iova = addr,
> > + .translated_addr = addr,
> > + .addr_mask = 0,
> > + .perm = IOMMU_NONE,
> > + };
> > +
> > + if (!s->enable) {
> > + /* Bypass IOPMP */
> > + entry.addr_mask = -1ULL,
> > + entry.perm = IOMMU_RW;
> > + return entry;
> > + }
> > +
> > + /* unknown RRID */
> > + if (rrid >= s->rrid_num) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_RRID);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, addr, error_info);
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + return entry;
> > + }
> > +
> > + if (s->transaction_state[rrid].supported == true) {
> > + start_addr = s->transaction_state[rrid].start_addr;
> > + end_addr = s->transaction_state[rrid].end_addr;
> > + } else {
> > + /* No transaction information, use the same address */
> > + start_addr = addr;
> > + end_addr = addr;
> > + }
> > +
> > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > + &prior_entry_in_tlb);
> > + if (result == ENTRY_HIT) {
> > + entry.addr_mask = s->entry_addr[entry_idx].ea -
> > + s->entry_addr[entry_idx].sa;
> > + if (prior_entry_in_tlb) {
> > + /* Make TLB repeat iommu translation on every access */
>
> I don't follow this, if we have a prior entry in the TLB cache we
> don't cache the accesses?
For the cached TLB result to be used, the highest-priority entry in the TLB must
occupy the entire TLB page. If a lower-priority entry fills the entire TLB page,
it is still necessary to check which entry the transaction hits on each access
to the TLB page.
>
> > + entry.addr_mask = 0;
> > + }
> > + iopmp_perm = s->regs.entry[entry_idx].cfg_reg & IOPMP_RWX;
> > + if (flags) {
> > + if ((iopmp_perm & flags) == 0) {
> > + /* Permission denied */
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_READ + flags - 1);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + } else {
> > + entry.target_as = &s->downstream_as;
> > + entry.perm = iopmp_perm;
> > + }
> > + } else {
> > + /* CPU access with IOMMU_NONE flag */
> > + if (iopmp_perm & IOPMP_XO) {
> > + if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) {
> > + entry.target_as = &s->downstream_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > + entry.target_as = &s->blocked_w_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > + entry.target_as = &s->blocked_r_as;
> > + } else {
> > + entry.target_as = &s->blocked_rw_as;
> > + }
> > + } else {
> > + if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) {
> > + entry.target_as = &s->blocked_x_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > + entry.target_as = &s->blocked_wx_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > + entry.target_as = &s->blocked_rx_as;
> > + } else {
> > + entry.target_as = &s->blocked_rwx_as;
> > + }
> > + }
> > + entry.perm = IOMMU_RW;
> > + }
> > + } else {
> > + if (flags) {
> > + if (result == ENTRY_PAR_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_PARHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_NOHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + }
> > + }
> > + /* CPU access with IOMMU_NONE flag no_hit or par_hit */
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + }
> > + return entry;
> > +}
> > +
> > +static const MemoryRegionOps iopmp_ops = {
> > + .read = iopmp_read,
> > + .write = iopmp_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 4, .max_access_size = 4}
> > +};
> > +
> > +static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr,
> > + uint64_t value, unsigned size,
> > + MemTxAttrs attrs)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + return address_space_write(&s->downstream_as, addr, attrs, &value, size);
> > +}
> > +
> > +static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr,
> > + uint64_t *pdata, unsigned size,
> > + MemTxAttrs attrs)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + return address_space_read(&s->downstream_as, addr, attrs, pdata, size);
> > +}
> > +
> > +static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr,
> > + uint64_t *data, unsigned size,
> > + MemTxAttrs attrs,
> > + iopmp_access_type access_type) {
> > + IopmpState *s = IOPMP(opaque);
> > + int md_idx, entry_idx;
> > + uint32_t error_info = 0;
> > + uint32_t error_id = 0;
> > + int rrid = attrs.requester_id;
> > + int result;
> > + hwaddr start_addr, end_addr;
> > + start_addr = addr;
> > + end_addr = addr;
> > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > + NULL);
> > +
> > + if (result == ENTRY_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + access_type);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else if (result == ENTRY_PAR_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_PARHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE,
> > + access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_NOHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + }
> > +
> > + if (access_type == IOPMP_ACCESS_READ) {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RRE)) {
> > + case RRE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RRE_SUCCESS_VALUE:
> > + *data = s->fabricated_v;
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + } else if (access_type == IOPMP_ACCESS_WRITE) {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RWE)) {
> > + case RWE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RWE_SUCCESS:
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + } else {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RXE)) {
> > + case RXE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RXE_SUCCESS_VALUE:
> > + *data = s->fabricated_v;
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + }
> > + return MEMTX_OK;
> > +}
> > +
> > +static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, &value, size, attrs,
> > + IOPMP_ACCESS_WRITE);
> > +}
> > +
> > +static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > + IOPMP_ACCESS_READ);
> > +}
> > +
> > +static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > + IOPMP_ACCESS_FETCH);
> > +}
> > +
> > +static const MemoryRegionOps iopmp_block_rw_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_w_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_r_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_rwx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_wx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_rx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_x_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static void iopmp_realize(DeviceState *dev, Error **errp)
> > +{
> > + Object *obj = OBJECT(dev);
> > + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> > + IopmpState *s = IOPMP(dev);
> > + uint64_t size;
> > +
> > + size = -1ULL;
> > + s->model = IOPMP_MODEL_RAPIDK;
>
> Should this be a property to allow other models in the future?
Yes, it will be refined in next revision.
>
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > +
> > + s->prient_prog = s->default_prient_prog;
> > + s->rrid_num = MIN(s->rrid_num, IOPMP_MAX_RRID_NUM);
> > + s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM);
> > + s->entry_num = s->md_num * s->k;
> > + s->prio_entry = MIN(s->prio_entry, s->entry_num);
> > +
> > + s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t));
> > + s->regs.mdcfg[0] = s->k;
> > +
> > + s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > + s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > + s->regs.entry = g_malloc0(s->entry_num * sizeof(iopmp_entry_t));
> > + s->entry_addr = g_malloc0(s->entry_num * sizeof(iopmp_addr_t));
> > + s->transaction_state = g_malloc0(s->rrid_num *
> > + sizeof(iopmp_transaction_state));
> > + qemu_mutex_init(&s->iopmp_transaction_mutex);
> > +
> > + memory_region_init_iommu(&s->iommu, sizeof(s->iommu),
> > + TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > + obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX);
> > + memory_region_init_io(&s->mmio, obj, &iopmp_ops,
> > + s, "iopmp-regs", 0x100000);
> > + sysbus_init_mmio(sbd, &s->mmio);
> > +
> > + memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops,
> > + s, "iopmp-blocked-rw", size);
> > + memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops,
> > + s, "iopmp-blocked-w", size);
> > + memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops,
> > + s, "iopmp-blocked-r", size);
> > +
> > + memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops,
> > + s, "iopmp-blocked-rwx", size);
> > + memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops,
> > + s, "iopmp-blocked-wx", size);
> > + memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops,
> > + s, "iopmp-blocked-rx", size);
> > + memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops,
> > + s, "iopmp-blocked-x", size);
> > + address_space_init(&s->blocked_rw_as, &s->blocked_rw,
> > + "iopmp-blocked-rw-as");
> > + address_space_init(&s->blocked_w_as, &s->blocked_w,
> > + "iopmp-blocked-w-as");
> > + address_space_init(&s->blocked_r_as, &s->blocked_r,
> > + "iopmp-blocked-r-as");
> > +
> > + address_space_init(&s->blocked_rwx_as, &s->blocked_rwx,
> > + "iopmp-blocked-rwx-as");
> > + address_space_init(&s->blocked_wx_as, &s->blocked_wx,
> > + "iopmp-blocked-wx-as");
> > + address_space_init(&s->blocked_rx_as, &s->blocked_rx,
> > + "iopmp-blocked-rx-as");
> > + address_space_init(&s->blocked_x_as, &s->blocked_x,
> > + "iopmp-blocked-x-as");
> > +}
> > +
> > +static void iopmp_reset(DeviceState *dev)
> > +{
> > + IopmpState *s = IOPMP(dev);
> > +
> > + qemu_set_irq(s->irq, 0);
> > + memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t));
> > + memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t));
> > + memset(s->entry_addr, 0, s->entry_num * sizeof(iopmp_addr_t));
> > +
> > + s->regs.mdlck = 0;
> > + s->regs.mdlckh = 0;
> > + s->regs.entrylck = 0;
> > + s->regs.mdstall = 0;
> > + s->regs.mdstallh = 0;
> > + s->regs.rridscp = 0;
> > + s->regs.err_cfg = 0;
> > + s->regs.err_reqaddr = 0;
> > + s->regs.err_reqid = 0;
> > + s->regs.err_reqinfo = 0;
> > +
> > + s->prient_prog = s->default_prient_prog;
> > + s->enable = 0;
> > +
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > + s->regs.mdcfg[0] = s->k;
> > +}
> > +
> > +static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> > +{
> > + return attrs.requester_id;
> > +}
> > +
> > +static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data)
> > +{
> > + IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> > +
> > + imrc->translate = iopmp_translate;
> > + imrc->attrs_to_index = iopmp_attrs_to_index;
> > +}
> > +
> > +static Property iopmp_property[] = {
> > + DEFINE_PROP_BOOL("prient_prog", IopmpState, default_prient_prog, true),
> > + DEFINE_PROP_UINT32("k", IopmpState, k, 6),
> > + DEFINE_PROP_UINT32("prio_entry", IopmpState, prio_entry, 48),
> > + DEFINE_PROP_UINT32("rrid_num", IopmpState, rrid_num, 16),
> > + DEFINE_PROP_UINT32("md_num", IopmpState, md_num, 8),
> > + DEFINE_PROP_UINT32("entry_offset", IopmpState, entry_offset, 0x4000),
> > + DEFINE_PROP_UINT32("fabricated_v", IopmpState, fabricated_v, 0x0),
> > + DEFINE_PROP_END_OF_LIST(),
> > +};
> > +
> > +static void iopmp_class_init(ObjectClass *klass, void *data)
> > +{
> > + DeviceClass *dc = DEVICE_CLASS(klass);
> > + device_class_set_props(dc, iopmp_property);
> > + dc->realize = iopmp_realize;
> > + dc->reset = iopmp_reset;
> > +}
> > +
> > +static void iopmp_init(Object *obj)
> > +{
> > + IopmpState *s = IOPMP(obj);
> > + SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
> > +
> > + sysbus_init_irq(sbd, &s->irq);
> > +}
> > +
> > +static const TypeInfo iopmp_info = {
> > + .name = TYPE_IOPMP,
> > + .parent = TYPE_SYS_BUS_DEVICE,
> > + .instance_size = sizeof(IopmpState),
> > + .instance_init = iopmp_init,
> > + .class_init = iopmp_class_init,
> > +};
> > +
> > +static const TypeInfo
> > +iopmp_iommu_memory_region_info = {
> > + .name = TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > + .parent = TYPE_IOMMU_MEMORY_REGION,
> > + .class_init = iopmp_iommu_memory_region_class_init,
> > +};
> > +
> > +static void
> > +iopmp_register_types(void)
> > +{
> > + type_register_static(&iopmp_info);
> > + type_register_static(&iopmp_iommu_memory_region_info);
> > +}
> > +
> > +type_init(iopmp_register_types);
> > diff --git a/hw/misc/trace-events b/hw/misc/trace-events
> > index 1be0717c0c..c148166d2d 100644
> > --- a/hw/misc/trace-events
> > +++ b/hw/misc/trace-events
> > @@ -362,3 +362,6 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
> > aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> > aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> >
> > +# riscv_iopmp.c
> > +iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > +iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> > new file mode 100644
> > index 0000000000..b8fe479108
> > --- /dev/null
> > +++ b/include/hw/misc/riscv_iopmp.h
> > @@ -0,0 +1,168 @@
> > +/*
> > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > + *
> > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2 or later, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef RISCV_IOPMP_H
> > +#define RISCV_IOPMP_H
> > +
> > +#include "hw/sysbus.h"
> > +#include "qemu/typedefs.h"
> > +#include "memory.h"
> > +#include "exec/hwaddr.h"
> > +
> > +#define TYPE_IOPMP "iopmp"
> > +#define IOPMP(obj) OBJECT_CHECK(IopmpState, (obj), TYPE_IOPMP)
> > +
> > +#define IOPMP_MAX_MD_NUM 63
> > +#define IOPMP_MAX_RRID_NUM 65535
> > +#define IOPMP_MAX_ENTRY_NUM 65535
> > +
> > +#define VENDER_VIRT 0
> > +#define SPECVER_0_9_1 91
> > +#define IMPID_0_9_1 91
> > +
> > +#define RRE_ERROR 0
> > +#define RRE_SUCCESS_VALUE 1
> > +
> > +#define RWE_ERROR 0
> > +#define RWE_SUCCESS 1
> > +
> > +#define RXE_ERROR 0
> > +#define RXE_SUCCESS_VALUE 1
> > +
> > +#define ERR_REQINFO_TTYPE_READ 1
> > +#define ERR_REQINFO_TTYPE_WRITE 2
> > +#define ERR_REQINFO_TTYPE_FETCH 3
> > +#define ERR_REQINFO_ETYPE_NOERROR 0
> > +#define ERR_REQINFO_ETYPE_READ 1
> > +#define ERR_REQINFO_ETYPE_WRITE 2
> > +#define ERR_REQINFO_ETYPE_FETCH 3
> > +#define ERR_REQINFO_ETYPE_PARHIT 4
> > +#define ERR_REQINFO_ETYPE_NOHIT 5
> > +#define ERR_REQINFO_ETYPE_RRID 6
> > +#define ERR_REQINFO_ETYPE_USER 7
> > +
> > +#define IOPMP_MODEL_FULL 0
> > +#define IOPMP_MODEL_RAPIDK 0x1
> > +#define IOPMP_MODEL_DYNAMICK 0x2
> > +#define IOPMP_MODEL_ISOLATION 0x3
> > +#define IOPMP_MODEL_COMPACTK 0x4
> > +
> > +#define ENTRY_NO_HIT 0
> > +#define ENTRY_PAR_HIT 1
> > +#define ENTRY_HIT 2
>
> Why not an enum?
>
Thank you for your suggestion. There will be enums in next version.
Thanks,
Ethan Chen
> Alistair
>
> > +
> > +/* The generic iopmp address space which downstream is system memory */
> > +extern AddressSpace iopmp_container_as;
> > +
> > +typedef enum {
> > + IOPMP_AMATCH_OFF, /* Null (off) */
> > + IOPMP_AMATCH_TOR, /* Top of Range */
> > + IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */
> > + IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */
> > +} iopmp_am_t;
> > +
> > +typedef enum {
> > + IOPMP_ACCESS_READ = 1,
> > + IOPMP_ACCESS_WRITE = 2,
> > + IOPMP_ACCESS_FETCH = 3
> > +} iopmp_access_type;
> > +
> > +typedef enum {
> > + IOPMP_NONE = 0,
> > + IOPMP_RO = 1,
> > + IOPMP_WO = 2,
> > + IOPMP_RW = 3,
> > + IOPMP_XO = 4,
> > + IOPMP_RX = 5,
> > + IOPMP_WX = 6,
> > + IOPMP_RWX = 7,
> > +} iopmp_permission;
> > +
> > +typedef struct {
> > + uint32_t addr_reg;
> > + uint32_t addrh_reg;
> > + uint32_t cfg_reg;
> > +} iopmp_entry_t;
> > +
> > +typedef struct {
> > + uint64_t sa;
> > + uint64_t ea;
> > +} iopmp_addr_t;
> > +
> > +typedef struct {
> > + uint32_t *srcmd_en;
> > + uint32_t *srcmd_enh;
> > + uint32_t *mdcfg;
> > + iopmp_entry_t *entry;
> > + uint32_t mdlck;
> > + uint32_t mdlckh;
> > + uint32_t entrylck;
> > + uint32_t mdcfglck;
> > + uint32_t mdstall;
> > + uint32_t mdstallh;
> > + uint32_t rridscp;
> > + uint32_t err_cfg;
> > + uint64_t err_reqaddr;
> > + uint32_t err_reqid;
> > + uint32_t err_reqinfo;
> > +} iopmp_regs;
> > +
> > +
> > +/* To detect partially hit */
> > +typedef struct iopmp_transaction_state {
> > + bool running;
> > + bool supported;
> > + hwaddr start_addr;
> > + hwaddr end_addr;
> > +} iopmp_transaction_state;
> > +
> > +typedef struct IopmpState {
> > + SysBusDevice parent_obj;
> > + iopmp_addr_t *entry_addr;
> > + MemoryRegion mmio;
> > + IOMMUMemoryRegion iommu;
> > + IOMMUMemoryRegion *next_iommu;
> > + iopmp_regs regs;
> > + MemoryRegion *downstream;
> > + MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx,
> > + blocked_wx, blocked_rwx;
> > + MemoryRegion stall_io;
> > + uint32_t model;
> > + uint32_t k;
> > + bool prient_prog;
> > + bool default_prient_prog;
> > + iopmp_transaction_state *transaction_state;
> > + QemuMutex iopmp_transaction_mutex;
> > +
> > + AddressSpace downstream_as;
> > + AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as,
> > + blocked_rx_as, blocked_wx_as, blocked_rwx_as;
> > + qemu_irq irq;
> > + bool enable;
> > +
> > + uint32_t prio_entry;
> > + uint32_t rrid_num;
> > + uint32_t md_num;
> > + uint32_t entry_num;
> > + uint32_t entry_offset;
> > + uint32_t fabricated_v;
> > +} IopmpState;
> > +
> > +#endif
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
2024-08-08 4:25 ` Alistair Francis
@ 2024-08-09 9:56 ` Ethan Chen via
0 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-08-09 9:56 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Thu, Aug 08, 2024 at 02:25:04PM +1000, Alistair Francis wrote:
>
> On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> >
> > The iopmp_setup_cpu() function configures the RISCV CPU to support IOPMP and
> > specifies the CPU's RRID.
> >
> > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > ---
> > hw/misc/riscv_iopmp.c | 6 ++++++
> > include/hw/misc/riscv_iopmp.h | 1 +
> > 2 files changed, 7 insertions(+)
> >
> > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > index e62ac57437..374bf5c610 100644
> > --- a/hw/misc/riscv_iopmp.c
> > +++ b/hw/misc/riscv_iopmp.c
> > @@ -1211,5 +1211,11 @@ void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> > "iopmp-downstream-as");
> > }
> >
> > +void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid)
> > +{
> > + cpu->cfg.iopmp = true;
> > + cpu->cfg.iopmp_rrid = rrid;
> > +}
>
> This should just be a normal CPU property, which the machine can then
> set to true if required
I will add CPU properties for IOPMP config.
Thanks,
Ethan Chen
>
> Alistair
>
> > +
> >
> > type_init(iopmp_register_types);
> > diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> > index ebe9c4bc4a..7e7da56d10 100644
> > --- a/include/hw/misc/riscv_iopmp.h
> > +++ b/include/hw/misc/riscv_iopmp.h
> > @@ -167,5 +167,6 @@ typedef struct IopmpState {
> >
> > void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> > uint32_t mapentry_num);
> > +void iopmp_setup_cpu(RISCVCPU *cpu, uint32_t rrid);
> >
> > #endif
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device
2024-08-08 3:56 ` Alistair Francis
2024-08-09 9:42 ` Ethan Chen via
@ 2024-08-09 10:03 ` Ethan Chen via
1 sibling, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-08-09 10:03 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Thu, Aug 08, 2024 at 01:56:35PM +1000, Alistair Francis wrote:
> [EXTERNAL MAIL]
>
> On Mon, Jul 15, 2024 at 7:58 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> >
> > Support basic functions of IOPMP specification v0.9.1 rapid-k model.
> > The specification url:
> > https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
> >
> > The IOPMP checks whether memory access from a device or CPU is valid.
> > This implementation uses an IOMMU to modify the address space accessed
> > by the device.
> >
> > For device access with IOMMUAccessFlags specifying read or write
> > (IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in
> > iopmp_translate. If the access is valid, the target address space is
> > downstream_as. If the access is blocked, it will be redirected to
> > blocked_rwx_as.
> >
> > For CPU access with IOMMUAccessFlags not specifying read or write
> > (IOMMU_NONE), the IOPMP translates the access to the corresponding
> > address space based on the permission. If the access has full permission
> > (rwx), the target address space is downstream_as. If the access has
> > limited permission, the target address space is blocked_ followed by
> > the lacked permissions.
> >
> > The operation of a blocked region can trigger an IOPMP interrupt, a bus
> > error, or it can respond with success and fabricated data, depending on
> > the value of the IOPMP ERR_CFG register.
> >
> > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > ---
> > hw/misc/Kconfig | 3 +
> > hw/misc/meson.build | 1 +
> > hw/misc/riscv_iopmp.c | 1154 +++++++++++++++++++++++++++++++++
> > hw/misc/trace-events | 3 +
> > include/hw/misc/riscv_iopmp.h | 168 +++++
> > 5 files changed, 1329 insertions(+)
> > create mode 100644 hw/misc/riscv_iopmp.c
> > create mode 100644 include/hw/misc/riscv_iopmp.h
> >
> > diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
> > index 1e08785b83..427f0c702e 100644
> > --- a/hw/misc/Kconfig
> > +++ b/hw/misc/Kconfig
> > @@ -213,4 +213,7 @@ config IOSB
> > config XLNX_VERSAL_TRNG
> > bool
> >
> > +config RISCV_IOPMP
> > + bool
> > +
> > source macio/Kconfig
> > diff --git a/hw/misc/meson.build b/hw/misc/meson.build
> > index 2ca8717be2..d9006e1d81 100644
> > --- a/hw/misc/meson.build
> > +++ b/hw/misc/meson.build
> > @@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c'))
> > system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
> > +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c'))
> >
> > subdir('macio')
> >
> > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > new file mode 100644
> > index 0000000000..db43e3c73f
> > --- /dev/null
> > +++ b/hw/misc/riscv_iopmp.c
> > @@ -0,0 +1,1154 @@
> > +/*
> > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > + *
> > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2 or later, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qemu/log.h"
> > +#include "qapi/error.h"
> > +#include "trace.h"
> > +#include "exec/exec-all.h"
> > +#include "exec/address-spaces.h"
> > +#include "hw/qdev-properties.h"
> > +#include "hw/sysbus.h"
> > +#include "hw/misc/riscv_iopmp.h"
> > +#include "memory.h"
> > +#include "hw/irq.h"
> > +#include "hw/registerfields.h"
> > +#include "trace.h"
> > +
> > +#define TYPE_IOPMP_IOMMU_MEMORY_REGION "iopmp-iommu-memory-region"
> > +
> > +REG32(VERSION, 0x00)
> > + FIELD(VERSION, VENDOR, 0, 24)
> > + FIELD(VERSION, SPECVER , 24, 8)
> > +REG32(IMP, 0x04)
> > + FIELD(IMP, IMPID, 0, 32)
> > +REG32(HWCFG0, 0x08)
> > + FIELD(HWCFG0, MODEL, 0, 4)
> > + FIELD(HWCFG0, TOR_EN, 4, 1)
> > + FIELD(HWCFG0, SPS_EN, 5, 1)
> > + FIELD(HWCFG0, USER_CFG_EN, 6, 1)
> > + FIELD(HWCFG0, PRIENT_PROG, 7, 1)
> > + FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1)
> > + FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1)
> > + FIELD(HWCFG0, CHK_X, 10, 1)
> > + FIELD(HWCFG0, NO_X, 11, 1)
> > + FIELD(HWCFG0, NO_W, 12, 1)
> > + FIELD(HWCFG0, STALL_EN, 13, 1)
> > + FIELD(HWCFG0, PEIS, 14, 1)
> > + FIELD(HWCFG0, PEES, 15, 1)
> > + FIELD(HWCFG0, MFR_EN, 16, 1)
> > + FIELD(HWCFG0, MD_NUM, 24, 7)
> > + FIELD(HWCFG0, ENABLE, 31, 1)
> > +REG32(HWCFG1, 0x0C)
> > + FIELD(HWCFG1, RRID_NUM, 0, 16)
> > + FIELD(HWCFG1, ENTRY_NUM, 16, 16)
> > +REG32(HWCFG2, 0x10)
> > + FIELD(HWCFG2, PRIO_ENTRY, 0, 16)
> > + FIELD(HWCFG2, RRID_TRANSL, 16, 16)
> > +REG32(ENTRYOFFSET, 0x14)
> > + FIELD(ENTRYOFFSET, OFFSET, 0, 32)
> > +REG32(MDSTALL, 0x30)
> > + FIELD(MDSTALL, EXEMPT, 0, 1)
> > + FIELD(MDSTALL, MD, 1, 31)
> > +REG32(MDSTALLH, 0x34)
> > + FIELD(MDSTALLH, MD, 0, 32)
> > +REG32(RRIDSCP, 0x38)
> > + FIELD(RRIDSCP, RRID, 0, 16)
> > + FIELD(RRIDSCP, OP, 30, 2)
> > +REG32(MDLCK, 0x40)
> > + FIELD(MDLCK, L, 0, 1)
> > + FIELD(MDLCK, MD, 1, 31)
> > +REG32(MDLCKH, 0x44)
> > + FIELD(MDLCKH, MDH, 0, 32)
> > +REG32(MDCFGLCK, 0x48)
> > + FIELD(MDCFGLCK, L, 0, 1)
> > + FIELD(MDCFGLCK, F, 1, 7)
> > +REG32(ENTRYLCK, 0x4C)
> > + FIELD(ENTRYLCK, L, 0, 1)
> > + FIELD(ENTRYLCK, F, 1, 16)
> > +REG32(ERR_CFG, 0x60)
> > + FIELD(ERR_CFG, L, 0, 1)
> > + FIELD(ERR_CFG, IE, 1, 1)
> > + FIELD(ERR_CFG, IRE, 2, 1)
> > + FIELD(ERR_CFG, IWE, 3, 1)
> > + FIELD(ERR_CFG, IXE, 4, 1)
> > + FIELD(ERR_CFG, RRE, 5, 1)
> > + FIELD(ERR_CFG, RWE, 6, 1)
> > + FIELD(ERR_CFG, RXE, 7, 1)
> > +REG32(ERR_REQINFO, 0x64)
> > + FIELD(ERR_REQINFO, V, 0, 1)
> > + FIELD(ERR_REQINFO, TTYPE, 1, 2)
> > + FIELD(ERR_REQINFO, ETYPE, 4, 3)
> > + FIELD(ERR_REQINFO, SVC, 7, 1)
> > +REG32(ERR_REQADDR, 0x68)
> > + FIELD(ERR_REQADDR, ADDR, 0, 32)
> > +REG32(ERR_REQADDRH, 0x6C)
> > + FIELD(ERR_REQADDRH, ADDRH, 0, 32)
> > +REG32(ERR_REQID, 0x70)
> > + FIELD(ERR_REQID, RRID, 0, 16)
> > + FIELD(ERR_REQID, EID, 16, 16)
> > +REG32(ERR_MFR, 0x74)
> > + FIELD(ERR_MFR, SVW, 0, 16)
> > + FIELD(ERR_MFR, SVI, 16, 12)
> > + FIELD(ERR_MFR, SVS, 31, 1)
> > +REG32(MDCFG0, 0x800)
> > + FIELD(MDCFG0, T, 0, 16)
> > +REG32(SRCMD_EN0, 0x1000)
> > + FIELD(SRCMD_EN0, L, 0, 1)
> > + FIELD(SRCMD_EN0, MD, 1, 31)
> > +REG32(SRCMD_ENH0, 0x1004)
> > + FIELD(SRCMD_ENH0, MDH, 0, 32)
> > +REG32(SRCMD_R0, 0x1008)
> > + FIELD(SRCMD_R0, MD, 1, 31)
> > +REG32(SRCMD_RH0, 0x100C)
> > + FIELD(SRCMD_RH0, MDH, 0, 32)
> > +REG32(SRCMD_W0, 0x1010)
> > + FIELD(SRCMD_W0, MD, 1, 31)
> > +REG32(SRCMD_WH0, 0x1014)
> > + FIELD(SRCMD_WH0, MDH, 0, 32)
> > +
> > +FIELD(ENTRY_ADDR, ADDR, 0, 32)
> > +FIELD(ENTRY_ADDRH, ADDRH, 0, 32)
> > +
> > +FIELD(ENTRY_CFG, R, 0, 1)
> > +FIELD(ENTRY_CFG, W, 1, 1)
> > +FIELD(ENTRY_CFG, X, 2, 1)
> > +FIELD(ENTRY_CFG, A, 3, 2)
> > +FIELD(ENTRY_CFG, SIRE, 5, 1)
> > +FIELD(ENTRY_CFG, SIWE, 6, 1)
> > +FIELD(ENTRY_CFG, SIXE, 7, 1)
> > +FIELD(ENTRY_CFG, SERE, 8, 1)
> > +FIELD(ENTRY_CFG, SEWE, 9, 1)
> > +FIELD(ENTRY_CFG, SEXE, 10, 1)
> > +
> > +FIELD(ENTRY_USER_CFG, IM, 0, 32)
> > +
> > +/* Offsets to SRCMD_EN(i) */
> > +#define SRCMD_EN_OFFSET 0x0
> > +#define SRCMD_ENH_OFFSET 0x4
> > +#define SRCMD_R_OFFSET 0x8
> > +#define SRCMD_RH_OFFSET 0xC
> > +#define SRCMD_W_OFFSET 0x10
> > +#define SRCMD_WH_OFFSET 0x14
> > +
> > +/* Offsets to ENTRY_ADDR(i) */
> > +#define ENTRY_ADDR_OFFSET 0x0
> > +#define ENTRY_ADDRH_OFFSET 0x4
> > +#define ENTRY_CFG_OFFSET 0x8
> > +#define ENTRY_USER_CFG_OFFSET 0xC
> > +
> > +/* Memmap for parallel IOPMPs */
> > +typedef struct iopmp_protection_memmap {
> > + MemMapEntry entry;
> > + IopmpState *iopmp_s;
> > + QLIST_ENTRY(iopmp_protection_memmap) list;
> > +} iopmp_protection_memmap;
> > +QLIST_HEAD(, iopmp_protection_memmap)
> > + iopmp_protection_memmaps = QLIST_HEAD_INITIALIZER(iopmp_protection_memmaps);
> > +
> > +static void iopmp_iommu_notify(IopmpState *s)
> > +{
> > + IOMMUTLBEvent event = {
> > + .entry = {
> > + .iova = 0,
> > + .translated_addr = 0,
> > + .addr_mask = -1ULL,
> > + .perm = IOMMU_NONE,
> > + },
> > + .type = IOMMU_NOTIFIER_UNMAP,
> > + };
> > +
> > + for (int i = 0; i < s->rrid_num; i++) {
> > + memory_region_notify_iommu(&s->iommu, i, event);
> > + }
> > +}
> > +
> > +static void iopmp_decode_napot(uint64_t a, uint64_t *sa,
> > + uint64_t *ea)
> > +{
> > + /*
> > + * aaaa...aaa0 8-byte NAPOT range
> > + * aaaa...aa01 16-byte NAPOT range
> > + * aaaa...a011 32-byte NAPOT range
> > + * ...
> > + * aa01...1111 2^XLEN-byte NAPOT range
> > + * a011...1111 2^(XLEN+1)-byte NAPOT range
> > + * 0111...1111 2^(XLEN+2)-byte NAPOT range
> > + * 1111...1111 Reserved
> > + */
> > +
> > + a = (a << 2) | 0x3;
> > + *sa = a & (a + 1);
> > + *ea = a | (a + 1);
> > +}
> > +
> > +static void iopmp_update_rule(IopmpState *s, uint32_t entry_index)
> > +{
> > + uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg;
> > + uint64_t this_addr = s->regs.entry[entry_index].addr_reg |
> > + ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32);
> > + uint64_t prev_addr = 0u;
> > + uint64_t sa = 0u;
> > + uint64_t ea = 0u;
> > +
> > + if (entry_index >= 1u) {
> > + prev_addr = s->regs.entry[entry_index - 1].addr_reg |
> > + ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32);
> > + }
> > +
> > + switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) {
> > + case IOPMP_AMATCH_OFF:
> > + sa = 0u;
> > + ea = -1;
> > + break;
> > +
> > + case IOPMP_AMATCH_TOR:
> > + sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */
> > + ea = ((this_addr) << 2) - 1u;
> > + if (sa > ea) {
> > + sa = ea = 0u;
> > + }
> > + break;
> > +
> > + case IOPMP_AMATCH_NA4:
> > + sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
> > + ea = (sa + 4u) - 1u;
> > + break;
> > +
> > + case IOPMP_AMATCH_NAPOT:
> > + iopmp_decode_napot(this_addr, &sa, &ea);
> > + break;
> > +
> > + default:
> > + sa = 0u;
> > + ea = 0u;
> > + break;
> > + }
> > +
> > + s->entry_addr[entry_index].sa = sa;
> > + s->entry_addr[entry_index].ea = ea;
> > + iopmp_iommu_notify(s);
> > +}
> > +
> > +static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + uint32_t rz = 0;
> > + uint32_t offset, idx;
> > +
> > + switch (addr) {
> > + case A_VERSION:
> > + rz = VENDER_VIRT << R_VERSION_VENDOR_SHIFT |
> > + SPECVER_0_9_1 << R_VERSION_SPECVER_SHIFT;
>
> It would be better to use the FIELD_DP32() macro instead of the manual shifts
It will be refined in next revision.
>
> > + break;
> > + case A_IMP:
> > + rz = IMPID_0_9_1;
> > + break;
> > + case A_HWCFG0:
> > + rz = s->model << R_HWCFG0_MODEL_SHIFT |
> > + 1 << R_HWCFG0_TOR_EN_SHIFT |
> > + 0 << R_HWCFG0_SPS_EN_SHIFT |
> > + 0 << R_HWCFG0_USER_CFG_EN_SHIFT |
> > + s->prient_prog << R_HWCFG0_PRIENT_PROG_SHIFT |
> > + 0 << R_HWCFG0_RRID_TRANSL_EN_SHIFT |
> > + 0 << R_HWCFG0_RRID_TRANSL_PROG_SHIFT |
> > + 1 << R_HWCFG0_CHK_X_SHIFT |
> > + 0 << R_HWCFG0_NO_X_SHIFT |
> > + 0 << R_HWCFG0_NO_W_SHIFT |
> > + 0 << R_HWCFG0_STALL_EN_SHIFT |
> > + 0 << R_HWCFG0_PEIS_SHIFT |
> > + 0 << R_HWCFG0_PEES_SHIFT |
> > + 0 << R_HWCFG0_MFR_EN_SHIFT |
> > + s->md_num << R_HWCFG0_MD_NUM_SHIFT |
> > + s->enable << R_HWCFG0_ENABLE_SHIFT ;
> > + break;
> > + case A_HWCFG1:
> > + rz = s->rrid_num << R_HWCFG1_RRID_NUM_SHIFT |
> > + s->entry_num << R_HWCFG1_ENTRY_NUM_SHIFT;
> > + break;
> > + case A_HWCFG2:
> > + rz = s->prio_entry << R_HWCFG2_PRIO_ENTRY_SHIFT;
> > + break;
> > + case A_ENTRYOFFSET:
> > + rz = s->entry_offset;
> > + break;
> > + case A_ERR_CFG:
> > + rz = s->regs.err_cfg;
> > + break;
> > + case A_MDLCK:
> > + rz = s->regs.mdlck;
> > + break;
> > + case A_MDLCKH:
> > + rz = s->regs.mdlckh;
> > + break;
> > + case A_MDCFGLCK:
> > + rz = s->regs.mdcfglck;
> > + break;
> > + case A_ENTRYLCK:
> > + rz = s->regs.entrylck;
> > + break;
> > + case A_ERR_REQADDR:
> > + rz = s->regs.err_reqaddr & UINT32_MAX;
> > + break;
> > + case A_ERR_REQADDRH:
> > + rz = s->regs.err_reqaddr >> 32;
> > + break;
> > + case A_ERR_REQID:
> > + rz = s->regs.err_reqid;
> > + break;
> > + case A_ERR_REQINFO:
> > + rz = s->regs.err_reqinfo;
> > + break;
> > +
> > + default:
> > + if (addr >= A_MDCFG0 &&
> > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > + offset = addr - A_MDCFG0;
> > + idx = offset >> 2;
> > + if (idx == 0 && offset == 0) {
> > + rz = s->regs.mdcfg[idx];
> > + } else {
> > + /* Only MDCFG0 is implemented in rapid-k model */
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + } else if (addr >= A_SRCMD_EN0 &&
> > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > + offset = addr - A_SRCMD_EN0;
> > + idx = offset >> 5;
> > + offset &= 0x1f;
> > +
> > + switch (offset) {
> > + case SRCMD_EN_OFFSET:
> > + rz = s->regs.srcmd_en[idx];
> > + break;
> > + case SRCMD_ENH_OFFSET:
> > + rz = s->regs.srcmd_enh[idx];
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + } else if (addr >= s->entry_offset &&
> > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET +
> > + 16 * (s->entry_num - 1)) {
> > + offset = addr - s->entry_offset;
> > + idx = offset >> 4;
> > + offset &= 0xf;
> > +
> > + switch (offset) {
> > + case ENTRY_ADDR_OFFSET:
> > + rz = s->regs.entry[idx].addr_reg;
> > + break;
> > + case ENTRY_ADDRH_OFFSET:
> > + rz = s->regs.entry[idx].addrh_reg;
> > + break;
> > + case ENTRY_CFG_OFFSET:
> > + rz = s->regs.entry[idx].cfg_reg;
> > + break;
> > + case ENTRY_USER_CFG_OFFSET:
> > + /* Does not support user customized permission */
> > + rz = 0;
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + } else {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + break;
> > + }
> > + trace_iopmp_read(addr, rz);
> > + return rz;
> > +}
> > +
> > +static void
> > +iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + uint32_t offset, idx;
> > + uint32_t value32 = value;
> > +
> > + trace_iopmp_write(addr, value32);
> > +
> > + switch (addr) {
> > + case A_VERSION: /* RO */
> > + break;
> > + case A_IMP: /* RO */
> > + break;
> > + case A_HWCFG0:
> > + if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) {
> > + /* W1C */
> > + s->prient_prog = 0;
> > + }
> > + if (FIELD_EX32(value32, HWCFG0, ENABLE)) {
> > + /* W1S */
> > + s->enable = 1;
> > + iopmp_iommu_notify(s);
> > + }
> > + break;
> > + case A_HWCFG1: /* RO */
> > + break;
> > + case A_HWCFG2:
> > + if (s->prient_prog) {
> > + s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY);
> > + }
> > + break;
> > + case A_ERR_CFG:
> > + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) {
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L,
> > + FIELD_EX32(value32, ERR_CFG, L));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE,
> > + FIELD_EX32(value32, ERR_CFG, IE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IRE,
> > + FIELD_EX32(value32, ERR_CFG, IRE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RRE,
> > + FIELD_EX32(value32, ERR_CFG, RRE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IWE,
> > + FIELD_EX32(value32, ERR_CFG, IWE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RWE,
> > + FIELD_EX32(value32, ERR_CFG, RWE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IXE,
> > + FIELD_EX32(value32, ERR_CFG, IXE));
> > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RXE,
> > + FIELD_EX32(value32, ERR_CFG, RXE));
> > + }
> > + break;
> > + case A_MDLCK:
> > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > + s->regs.mdlck = value32;
> > + }
> > + break;
> > + case A_MDLCKH:
> > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > + s->regs.mdlckh = value32;
> > + }
> > + break;
> > + case A_MDCFGLCK:
> > + if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) {
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F,
> > + FIELD_EX32(value32, MDCFGLCK, F));
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L,
> > + FIELD_EX32(value32, MDCFGLCK, L));
> > + }
> > + break;
> > + case A_ENTRYLCK:
> > + if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) {
> > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F,
> > + FIELD_EX32(value32, ENTRYLCK, F));
> > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L,
> > + FIELD_EX32(value32, ENTRYLCK, L));
> > + }
> > + case A_ERR_REQADDR: /* RO */
> > + break;
> > + case A_ERR_REQADDRH: /* RO */
> > + break;
> > + case A_ERR_REQID: /* RO */
> > + break;
> > + case A_ERR_REQINFO:
> > + if (FIELD_EX32(value32, ERR_REQINFO, V)) {
> > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo,
> > + ERR_REQINFO, V, 0);
> > + qemu_set_irq(s->irq, 0);
> > + }
> > + break;
> > +
> > + default:
> > + if (addr >= A_MDCFG0 &&
> > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > + offset = addr - A_MDCFG0;
> > + idx = offset >> 2;
> > + /* RO in rapid-k model */
> > + if (idx > 0) {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + }
> > + } else if (addr >= A_SRCMD_EN0 &&
> > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > + offset = addr - A_SRCMD_EN0;
> > + idx = offset >> 5;
> > + offset &= 0x1f;
> > +
> > + if (offset % 4) {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L)
> > + == 0) {
> > + switch (offset) {
> > + case SRCMD_EN_OFFSET:
> > + s->regs.srcmd_en[idx] =
> > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L,
> > + FIELD_EX32(value32, SRCMD_EN0, L));
> > +
> > + /* MD field is protected by mdlck */
> > + value32 = (value32 & ~s->regs.mdlck) |
> > + (s->regs.srcmd_en[idx] & s->regs.mdlck);
> > + s->regs.srcmd_en[idx] =
> > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD,
> > + FIELD_EX32(value32, SRCMD_EN0, MD));
> > + break;
> > + case SRCMD_ENH_OFFSET:
> > + value32 = (value32 & ~s->regs.mdlckh) |
> > + (s->regs.srcmd_enh[idx] & s->regs.mdlckh);
> > + s->regs.srcmd_enh[idx] =
> > + FIELD_DP32(s->regs.srcmd_enh[idx], SRCMD_ENH0, MDH,
> > + value32);
> > + break;
> > + default:
> > + break;
> > + }
> > + }
> > + } else if (addr >= s->entry_offset &&
> > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET
> > + + 16 * (s->entry_num - 1)) {
> > + offset = addr - s->entry_offset;
> > + idx = offset >> 4;
> > + offset &= 0xf;
> > +
> > + /* index < ENTRYLCK_F is protected */
> > + if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) {
> > + switch (offset) {
> > + case ENTRY_ADDR_OFFSET:
> > + s->regs.entry[idx].addr_reg = value32;
> > + break;
> > + case ENTRY_ADDRH_OFFSET:
> > + s->regs.entry[idx].addrh_reg = value32;
> > + break;
> > + case ENTRY_CFG_OFFSET:
> > + s->regs.entry[idx].cfg_reg = value32;
> > + break;
> > + case ENTRY_USER_CFG_OFFSET:
> > + /* Does not support user customized permission */
> > + break;
> > + default:
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > + __func__, (int)addr);
> > + break;
> > + }
> > + iopmp_update_rule(s, idx);
> > + if (idx + 1 < s->entry_num &&
> > + FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) ==
> > + IOPMP_AMATCH_TOR) {
> > + iopmp_update_rule(s, idx + 1);
> > + }
> > + }
> > + } else {
> > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__,
> > + (int)addr);
> > + }
> > + }
> > +}
> > +
> > +/* Match entry in memory domain */
> > +static int match_entry_md(IopmpState *s, int md_idx, hwaddr start_addr,
> > + hwaddr end_addr, int *entry_idx,
> > + int *prior_entry_in_tlb)
> > +{
> > + int entry_idx_s, entry_idx_e;
> > + int result = ENTRY_NO_HIT;
> > + int i = 0;
> > + hwaddr tlb_sa = start_addr & ~(TARGET_PAGE_SIZE - 1);
> > + hwaddr tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> > +
> > + entry_idx_s = md_idx * s->regs.mdcfg[0];
> > + entry_idx_e = (md_idx + 1) * s->regs.mdcfg[0];
> > +
> > + if (entry_idx_s >= s->entry_num) {
> > + return result;
> > + }
> > + if (entry_idx_e > s->entry_num) {
> > + entry_idx_e = s->entry_num;
> > + }
> > + i = entry_idx_s;
> > + for (i = entry_idx_s; i < entry_idx_e; i++) {
> > + if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) ==
> > + IOPMP_AMATCH_OFF) {
> > + continue;
> > + }
> > + if (start_addr >= s->entry_addr[i].sa &&
> > + start_addr <= s->entry_addr[i].ea) {
> > + /* Check end address */
> > + if (end_addr >= s->entry_addr[i].sa &&
> > + end_addr <= s->entry_addr[i].ea) {
> > + *entry_idx = i;
> > + return ENTRY_HIT;
> > + } else if (i >= s->prio_entry) {
> > + /* Continue for non-prio_entry */
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + } else if (end_addr >= s->entry_addr[i].sa &&
> > + end_addr <= s->entry_addr[i].ea) {
> > + /* Only end address matches the entry */
> > + if (i >= s->prio_entry) {
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + } else if (start_addr < s->entry_addr[i].sa &&
> > + end_addr > s->entry_addr[i].ea) {
> > + if (i >= s->prio_entry) {
> > + continue;
> > + } else {
> > + *entry_idx = i;
> > + return ENTRY_PAR_HIT;
> > + }
> > + }
> > + if (prior_entry_in_tlb != NULL) {
> > + if ((s->entry_addr[i].sa >= tlb_sa &&
> > + s->entry_addr[i].sa <= tlb_ea) ||
> > + (s->entry_addr[i].ea >= tlb_sa &&
> > + s->entry_addr[i].ea <= tlb_ea)) {
> > + /*
> > + * TLB should not use the cached result when the tlb contains
> > + * higher priority entry
> > + */
> > + *prior_entry_in_tlb = 1;
> > + }
> > + }
> > + }
> > + return result;
> > +}
> > +
> > +static int match_entry(IopmpState *s, int rrid, hwaddr start_addr,
> > + hwaddr end_addr, int *match_md_idx,
> > + int *match_entry_idx, int *prior_entry_in_tlb)
> > +{
> > + int cur_result = ENTRY_NO_HIT;
> > + int result = ENTRY_NO_HIT;
> > + /* Remove lock bit */
> > + uint64_t srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] |
> > + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1;
> > +
> > + for (int md_idx = 0; md_idx < s->md_num; md_idx++) {
> > + if (srcmd_en & (1ULL << md_idx)) {
> > + cur_result = match_entry_md(s, md_idx, start_addr, end_addr,
> > + match_entry_idx, prior_entry_in_tlb);
> > + if (cur_result == ENTRY_HIT || cur_result == ENTRY_PAR_HIT) {
> > + *match_md_idx = md_idx;
> > + return cur_result;
> > + }
> > + }
> > + }
> > + return result;
> > +}
> > +
> > +static void iopmp_error_reaction(IopmpState *s, uint32_t id, hwaddr start,
> > + uint32_t info)
> > +{
> > + if (!FIELD_EX32(s->regs.err_reqinfo, ERR_REQINFO, V)) {
> > + s->regs.err_reqinfo = info;
> > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo, ERR_REQINFO, V,
> > + 1);
> > + s->regs.err_reqid = id;
> > + /* addr[LEN+2:2] */
> > + s->regs.err_reqaddr = start >> 2;
> > +
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_READ &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IRE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_WRITE &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IWE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_FETCH &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IXE)) {
> > + qemu_set_irq(s->irq, 1);
> > + }
> > + }
> > +}
> > +
> > +static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
> > + IOMMUAccessFlags flags, int iommu_idx)
> > +{
> > + int rrid = iommu_idx;
> > + IopmpState *s = IOPMP(container_of(iommu, IopmpState, iommu));
> > + hwaddr start_addr, end_addr;
> > + int entry_idx = -1;
> > + int md_idx = -1;
> > + int result;
> > + uint32_t error_info = 0;
> > + uint32_t error_id = 0;
> > + int prior_entry_in_tlb = 0;
> > + iopmp_permission iopmp_perm;
> > + IOMMUTLBEntry entry = {
> > + .target_as = &s->downstream_as,
> > + .iova = addr,
> > + .translated_addr = addr,
> > + .addr_mask = 0,
> > + .perm = IOMMU_NONE,
> > + };
> > +
> > + if (!s->enable) {
> > + /* Bypass IOPMP */
> > + entry.addr_mask = -1ULL,
> > + entry.perm = IOMMU_RW;
> > + return entry;
> > + }
> > +
> > + /* unknown RRID */
> > + if (rrid >= s->rrid_num) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_RRID);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, addr, error_info);
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + return entry;
> > + }
> > +
> > + if (s->transaction_state[rrid].supported == true) {
> > + start_addr = s->transaction_state[rrid].start_addr;
> > + end_addr = s->transaction_state[rrid].end_addr;
> > + } else {
> > + /* No transaction information, use the same address */
> > + start_addr = addr;
> > + end_addr = addr;
> > + }
> > +
> > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > + &prior_entry_in_tlb);
> > + if (result == ENTRY_HIT) {
> > + entry.addr_mask = s->entry_addr[entry_idx].ea -
> > + s->entry_addr[entry_idx].sa;
> > + if (prior_entry_in_tlb) {
> > + /* Make TLB repeat iommu translation on every access */
>
> I don't follow this, if we have a prior entry in the TLB cache we
> don't cache the accesses?
For the cached TLB result to be used, the highest-priority entry in the TLB must
occupy the entire TLB page. If a lower-priority entry fills the entire TLB page,
it is still necessary to check which entry the transaction hits on each access
to the TLB page.
>
> > + entry.addr_mask = 0;
> > + }
> > + iopmp_perm = s->regs.entry[entry_idx].cfg_reg & IOPMP_RWX;
> > + if (flags) {
> > + if ((iopmp_perm & flags) == 0) {
> > + /* Permission denied */
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_READ + flags - 1);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + } else {
> > + entry.target_as = &s->downstream_as;
> > + entry.perm = iopmp_perm;
> > + }
> > + } else {
> > + /* CPU access with IOMMU_NONE flag */
> > + if (iopmp_perm & IOPMP_XO) {
> > + if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) {
> > + entry.target_as = &s->downstream_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > + entry.target_as = &s->blocked_w_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > + entry.target_as = &s->blocked_r_as;
> > + } else {
> > + entry.target_as = &s->blocked_rw_as;
> > + }
> > + } else {
> > + if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) {
> > + entry.target_as = &s->blocked_x_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > + entry.target_as = &s->blocked_wx_as;
> > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > + entry.target_as = &s->blocked_rx_as;
> > + } else {
> > + entry.target_as = &s->blocked_rwx_as;
> > + }
> > + }
> > + entry.perm = IOMMU_RW;
> > + }
> > + } else {
> > + if (flags) {
> > + if (result == ENTRY_PAR_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_PARHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_NOHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + }
> > + }
> > + /* CPU access with IOMMU_NONE flag no_hit or par_hit */
> > + entry.target_as = &s->blocked_rwx_as;
> > + entry.perm = IOMMU_RW;
> > + }
> > + return entry;
> > +}
> > +
> > +static const MemoryRegionOps iopmp_ops = {
> > + .read = iopmp_read,
> > + .write = iopmp_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 4, .max_access_size = 4}
> > +};
> > +
> > +static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr,
> > + uint64_t value, unsigned size,
> > + MemTxAttrs attrs)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + return address_space_write(&s->downstream_as, addr, attrs, &value, size);
> > +}
> > +
> > +static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr,
> > + uint64_t *pdata, unsigned size,
> > + MemTxAttrs attrs)
> > +{
> > + IopmpState *s = IOPMP(opaque);
> > + return address_space_read(&s->downstream_as, addr, attrs, pdata, size);
> > +}
> > +
> > +static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr,
> > + uint64_t *data, unsigned size,
> > + MemTxAttrs attrs,
> > + iopmp_access_type access_type) {
> > + IopmpState *s = IOPMP(opaque);
> > + int md_idx, entry_idx;
> > + uint32_t error_info = 0;
> > + uint32_t error_id = 0;
> > + int rrid = attrs.requester_id;
> > + int result;
> > + hwaddr start_addr, end_addr;
> > + start_addr = addr;
> > + end_addr = addr;
> > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > + NULL);
> > +
> > + if (result == ENTRY_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + access_type);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else if (result == ENTRY_PAR_HIT) {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_PARHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE,
> > + access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + } else {
> > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > + ERR_REQINFO_ETYPE_NOHIT);
> > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > + }
> > +
> > + if (access_type == IOPMP_ACCESS_READ) {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RRE)) {
> > + case RRE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RRE_SUCCESS_VALUE:
> > + *data = s->fabricated_v;
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + } else if (access_type == IOPMP_ACCESS_WRITE) {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RWE)) {
> > + case RWE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RWE_SUCCESS:
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + } else {
> > +
> > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RXE)) {
> > + case RXE_ERROR:
> > + return MEMTX_ERROR;
> > + break;
> > + case RXE_SUCCESS_VALUE:
> > + *data = s->fabricated_v;
> > + return MEMTX_OK;
> > + break;
> > + default:
> > + break;
> > + }
> > + return MEMTX_OK;
> > + }
> > + return MEMTX_OK;
> > +}
> > +
> > +static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, &value, size, attrs,
> > + IOPMP_ACCESS_WRITE);
> > +}
> > +
> > +static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > + IOPMP_ACCESS_READ);
> > +}
> > +
> > +static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata,
> > + unsigned size, MemTxAttrs attrs)
> > +{
> > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > + IOPMP_ACCESS_FETCH);
> > +}
> > +
> > +static const MemoryRegionOps iopmp_block_rw_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_w_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_r_ops = {
> > + .fetch_with_attrs = iopmp_permssion_read,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_rwx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_wx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_block_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_rx_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_block_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static const MemoryRegionOps iopmp_block_x_ops = {
> > + .fetch_with_attrs = iopmp_block_fetch,
> > + .read_with_attrs = iopmp_permssion_read,
> > + .write_with_attrs = iopmp_permssion_write,
> > + .endianness = DEVICE_NATIVE_ENDIAN,
> > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > +};
> > +
> > +static void iopmp_realize(DeviceState *dev, Error **errp)
> > +{
> > + Object *obj = OBJECT(dev);
> > + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> > + IopmpState *s = IOPMP(dev);
> > + uint64_t size;
> > +
> > + size = -1ULL;
> > + s->model = IOPMP_MODEL_RAPIDK;
>
> Should this be a property to allow other models in the future?
Sure. It will be added in next revision.
>
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > +
> > + s->prient_prog = s->default_prient_prog;
> > + s->rrid_num = MIN(s->rrid_num, IOPMP_MAX_RRID_NUM);
> > + s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM);
> > + s->entry_num = s->md_num * s->k;
> > + s->prio_entry = MIN(s->prio_entry, s->entry_num);
> > +
> > + s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t));
> > + s->regs.mdcfg[0] = s->k;
> > +
> > + s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > + s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > + s->regs.entry = g_malloc0(s->entry_num * sizeof(iopmp_entry_t));
> > + s->entry_addr = g_malloc0(s->entry_num * sizeof(iopmp_addr_t));
> > + s->transaction_state = g_malloc0(s->rrid_num *
> > + sizeof(iopmp_transaction_state));
> > + qemu_mutex_init(&s->iopmp_transaction_mutex);
> > +
> > + memory_region_init_iommu(&s->iommu, sizeof(s->iommu),
> > + TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > + obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX);
> > + memory_region_init_io(&s->mmio, obj, &iopmp_ops,
> > + s, "iopmp-regs", 0x100000);
> > + sysbus_init_mmio(sbd, &s->mmio);
> > +
> > + memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops,
> > + s, "iopmp-blocked-rw", size);
> > + memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops,
> > + s, "iopmp-blocked-w", size);
> > + memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops,
> > + s, "iopmp-blocked-r", size);
> > +
> > + memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops,
> > + s, "iopmp-blocked-rwx", size);
> > + memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops,
> > + s, "iopmp-blocked-wx", size);
> > + memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops,
> > + s, "iopmp-blocked-rx", size);
> > + memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops,
> > + s, "iopmp-blocked-x", size);
> > + address_space_init(&s->blocked_rw_as, &s->blocked_rw,
> > + "iopmp-blocked-rw-as");
> > + address_space_init(&s->blocked_w_as, &s->blocked_w,
> > + "iopmp-blocked-w-as");
> > + address_space_init(&s->blocked_r_as, &s->blocked_r,
> > + "iopmp-blocked-r-as");
> > +
> > + address_space_init(&s->blocked_rwx_as, &s->blocked_rwx,
> > + "iopmp-blocked-rwx-as");
> > + address_space_init(&s->blocked_wx_as, &s->blocked_wx,
> > + "iopmp-blocked-wx-as");
> > + address_space_init(&s->blocked_rx_as, &s->blocked_rx,
> > + "iopmp-blocked-rx-as");
> > + address_space_init(&s->blocked_x_as, &s->blocked_x,
> > + "iopmp-blocked-x-as");
> > +}
> > +
> > +static void iopmp_reset(DeviceState *dev)
> > +{
> > + IopmpState *s = IOPMP(dev);
> > +
> > + qemu_set_irq(s->irq, 0);
> > + memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t));
> > + memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t));
> > + memset(s->entry_addr, 0, s->entry_num * sizeof(iopmp_addr_t));
> > +
> > + s->regs.mdlck = 0;
> > + s->regs.mdlckh = 0;
> > + s->regs.entrylck = 0;
> > + s->regs.mdstall = 0;
> > + s->regs.mdstallh = 0;
> > + s->regs.rridscp = 0;
> > + s->regs.err_cfg = 0;
> > + s->regs.err_reqaddr = 0;
> > + s->regs.err_reqid = 0;
> > + s->regs.err_reqinfo = 0;
> > +
> > + s->prient_prog = s->default_prient_prog;
> > + s->enable = 0;
> > +
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > + s->regs.mdcfg[0] = s->k;
> > +}
> > +
> > +static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> > +{
> > + return attrs.requester_id;
> > +}
> > +
> > +static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data)
> > +{
> > + IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> > +
> > + imrc->translate = iopmp_translate;
> > + imrc->attrs_to_index = iopmp_attrs_to_index;
> > +}
> > +
> > +static Property iopmp_property[] = {
> > + DEFINE_PROP_BOOL("prient_prog", IopmpState, default_prient_prog, true),
> > + DEFINE_PROP_UINT32("k", IopmpState, k, 6),
> > + DEFINE_PROP_UINT32("prio_entry", IopmpState, prio_entry, 48),
> > + DEFINE_PROP_UINT32("rrid_num", IopmpState, rrid_num, 16),
> > + DEFINE_PROP_UINT32("md_num", IopmpState, md_num, 8),
> > + DEFINE_PROP_UINT32("entry_offset", IopmpState, entry_offset, 0x4000),
> > + DEFINE_PROP_UINT32("fabricated_v", IopmpState, fabricated_v, 0x0),
> > + DEFINE_PROP_END_OF_LIST(),
> > +};
> > +
> > +static void iopmp_class_init(ObjectClass *klass, void *data)
> > +{
> > + DeviceClass *dc = DEVICE_CLASS(klass);
> > + device_class_set_props(dc, iopmp_property);
> > + dc->realize = iopmp_realize;
> > + dc->reset = iopmp_reset;
> > +}
> > +
> > +static void iopmp_init(Object *obj)
> > +{
> > + IopmpState *s = IOPMP(obj);
> > + SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
> > +
> > + sysbus_init_irq(sbd, &s->irq);
> > +}
> > +
> > +static const TypeInfo iopmp_info = {
> > + .name = TYPE_IOPMP,
> > + .parent = TYPE_SYS_BUS_DEVICE,
> > + .instance_size = sizeof(IopmpState),
> > + .instance_init = iopmp_init,
> > + .class_init = iopmp_class_init,
> > +};
> > +
> > +static const TypeInfo
> > +iopmp_iommu_memory_region_info = {
> > + .name = TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > + .parent = TYPE_IOMMU_MEMORY_REGION,
> > + .class_init = iopmp_iommu_memory_region_class_init,
> > +};
> > +
> > +static void
> > +iopmp_register_types(void)
> > +{
> > + type_register_static(&iopmp_info);
> > + type_register_static(&iopmp_iommu_memory_region_info);
> > +}
> > +
> > +type_init(iopmp_register_types);
> > diff --git a/hw/misc/trace-events b/hw/misc/trace-events
> > index 1be0717c0c..c148166d2d 100644
> > --- a/hw/misc/trace-events
> > +++ b/hw/misc/trace-events
> > @@ -362,3 +362,6 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
> > aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> > aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> >
> > +# riscv_iopmp.c
> > +iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > +iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> > new file mode 100644
> > index 0000000000..b8fe479108
> > --- /dev/null
> > +++ b/include/hw/misc/riscv_iopmp.h
> > @@ -0,0 +1,168 @@
> > +/*
> > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > + *
> > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2 or later, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef RISCV_IOPMP_H
> > +#define RISCV_IOPMP_H
> > +
> > +#include "hw/sysbus.h"
> > +#include "qemu/typedefs.h"
> > +#include "memory.h"
> > +#include "exec/hwaddr.h"
> > +
> > +#define TYPE_IOPMP "iopmp"
> > +#define IOPMP(obj) OBJECT_CHECK(IopmpState, (obj), TYPE_IOPMP)
> > +
> > +#define IOPMP_MAX_MD_NUM 63
> > +#define IOPMP_MAX_RRID_NUM 65535
> > +#define IOPMP_MAX_ENTRY_NUM 65535
> > +
> > +#define VENDER_VIRT 0
> > +#define SPECVER_0_9_1 91
> > +#define IMPID_0_9_1 91
> > +
> > +#define RRE_ERROR 0
> > +#define RRE_SUCCESS_VALUE 1
> > +
> > +#define RWE_ERROR 0
> > +#define RWE_SUCCESS 1
> > +
> > +#define RXE_ERROR 0
> > +#define RXE_SUCCESS_VALUE 1
> > +
> > +#define ERR_REQINFO_TTYPE_READ 1
> > +#define ERR_REQINFO_TTYPE_WRITE 2
> > +#define ERR_REQINFO_TTYPE_FETCH 3
> > +#define ERR_REQINFO_ETYPE_NOERROR 0
> > +#define ERR_REQINFO_ETYPE_READ 1
> > +#define ERR_REQINFO_ETYPE_WRITE 2
> > +#define ERR_REQINFO_ETYPE_FETCH 3
> > +#define ERR_REQINFO_ETYPE_PARHIT 4
> > +#define ERR_REQINFO_ETYPE_NOHIT 5
> > +#define ERR_REQINFO_ETYPE_RRID 6
> > +#define ERR_REQINFO_ETYPE_USER 7
> > +
> > +#define IOPMP_MODEL_FULL 0
> > +#define IOPMP_MODEL_RAPIDK 0x1
> > +#define IOPMP_MODEL_DYNAMICK 0x2
> > +#define IOPMP_MODEL_ISOLATION 0x3
> > +#define IOPMP_MODEL_COMPACTK 0x4
> > +
> > +#define ENTRY_NO_HIT 0
> > +#define ENTRY_PAR_HIT 1
> > +#define ENTRY_HIT 2
>
> Why not an enum?
Thanks for your suggestion. It will be changed to enum in next revision.
Thanks,
Ethan Chen
>
> Alistair
>
> > +
> > +/* The generic iopmp address space which downstream is system memory */
> > +extern AddressSpace iopmp_container_as;
> > +
> > +typedef enum {
> > + IOPMP_AMATCH_OFF, /* Null (off) */
> > + IOPMP_AMATCH_TOR, /* Top of Range */
> > + IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */
> > + IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */
> > +} iopmp_am_t;
> > +
> > +typedef enum {
> > + IOPMP_ACCESS_READ = 1,
> > + IOPMP_ACCESS_WRITE = 2,
> > + IOPMP_ACCESS_FETCH = 3
> > +} iopmp_access_type;
> > +
> > +typedef enum {
> > + IOPMP_NONE = 0,
> > + IOPMP_RO = 1,
> > + IOPMP_WO = 2,
> > + IOPMP_RW = 3,
> > + IOPMP_XO = 4,
> > + IOPMP_RX = 5,
> > + IOPMP_WX = 6,
> > + IOPMP_RWX = 7,
> > +} iopmp_permission;
> > +
> > +typedef struct {
> > + uint32_t addr_reg;
> > + uint32_t addrh_reg;
> > + uint32_t cfg_reg;
> > +} iopmp_entry_t;
> > +
> > +typedef struct {
> > + uint64_t sa;
> > + uint64_t ea;
> > +} iopmp_addr_t;
> > +
> > +typedef struct {
> > + uint32_t *srcmd_en;
> > + uint32_t *srcmd_enh;
> > + uint32_t *mdcfg;
> > + iopmp_entry_t *entry;
> > + uint32_t mdlck;
> > + uint32_t mdlckh;
> > + uint32_t entrylck;
> > + uint32_t mdcfglck;
> > + uint32_t mdstall;
> > + uint32_t mdstallh;
> > + uint32_t rridscp;
> > + uint32_t err_cfg;
> > + uint64_t err_reqaddr;
> > + uint32_t err_reqid;
> > + uint32_t err_reqinfo;
> > +} iopmp_regs;
> > +
> > +
> > +/* To detect partially hit */
> > +typedef struct iopmp_transaction_state {
> > + bool running;
> > + bool supported;
> > + hwaddr start_addr;
> > + hwaddr end_addr;
> > +} iopmp_transaction_state;
> > +
> > +typedef struct IopmpState {
> > + SysBusDevice parent_obj;
> > + iopmp_addr_t *entry_addr;
> > + MemoryRegion mmio;
> > + IOMMUMemoryRegion iommu;
> > + IOMMUMemoryRegion *next_iommu;
> > + iopmp_regs regs;
> > + MemoryRegion *downstream;
> > + MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx,
> > + blocked_wx, blocked_rwx;
> > + MemoryRegion stall_io;
> > + uint32_t model;
> > + uint32_t k;
> > + bool prient_prog;
> > + bool default_prient_prog;
> > + iopmp_transaction_state *transaction_state;
> > + QemuMutex iopmp_transaction_mutex;
> > +
> > + AddressSpace downstream_as;
> > + AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as,
> > + blocked_rx_as, blocked_wx_as, blocked_rwx_as;
> > + qemu_irq irq;
> > + bool enable;
> > +
> > + uint32_t prio_entry;
> > + uint32_t rrid_num;
> > + uint32_t md_num;
> > + uint32_t entry_num;
> > + uint32_t entry_offset;
> > + uint32_t fabricated_v;
> > +} IopmpState;
> > +
> > +#endif
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory
2024-08-08 4:23 ` Alistair Francis
@ 2024-08-09 10:11 ` Ethan Chen via
2024-08-12 0:47 ` Alistair Francis
0 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-08-09 10:11 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Thu, Aug 08, 2024 at 02:23:56PM +1000, Alistair Francis wrote:
>
> On Mon, Jul 15, 2024 at 8:13 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> >
> > To enable system memory transactions through the IOPMP, memory regions must
> > be moved to the IOPMP downstream and then replaced with IOMMUs for IOPMP
> > translation.
> >
> > The iopmp_setup_system_memory() function copies subregions of system memory
> > to create the IOPMP downstream and then replaces the specified memory
> > regions in system memory with the IOMMU regions of the IOPMP. It also
> > adds entries to a protection map that records the relationship between
> > physical address regions and the IOPMP, which is used by the IOPMP DMA
> > API to send transaction information.
> >
> > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > ---
> > hw/misc/riscv_iopmp.c | 61 +++++++++++++++++++++++++++++++++++
> > include/hw/misc/riscv_iopmp.h | 3 ++
> > 2 files changed, 64 insertions(+)
> >
> > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > index db43e3c73f..e62ac57437 100644
> > --- a/hw/misc/riscv_iopmp.c
> > +++ b/hw/misc/riscv_iopmp.c
> > @@ -1151,4 +1151,65 @@ iopmp_register_types(void)
> > type_register_static(&iopmp_iommu_memory_region_info);
> > }
> >
> > +/*
> > + * Copies subregions from the source memory region to the destination memory
> > + * region
> > + */
> > +static void copy_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr)
> > +{
> > + int32_t priority;
> > + hwaddr addr;
> > + MemoryRegion *alias, *subregion;
> > + QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) {
> > + priority = subregion->priority;
> > + addr = subregion->addr;
> > + alias = g_malloc0(sizeof(MemoryRegion));
> > + memory_region_init_alias(alias, NULL, subregion->name, subregion, 0,
> > + memory_region_size(subregion));
> > + memory_region_add_subregion_overlap(dst_mr, addr, alias, priority);
> > + }
> > +}
>
> This seems strange. Do we really need to do this?
>
> I haven't looked at the memory_region stuff for awhile, but this seems
> clunky and prone to breakage.
>
> We already link s->iommu with the system memory
>
s->iommu occupies the address of the protected devices in system
memory. Since IOPMP does not alter address, the target address space
must differ from system memory to avoid infinite recursive iommu access.
The transaction will be redirected to a downstream memory region, which
is almost identical to system memory but without the iommu memory
region of IOPMP.
This function serves as a helper to create that downstream memory region.
Thanks,
Ethan Chen
> Alistair
>
> > +
> > +/*
> > + * Create downstream of system memory for IOPMP, and overlap memory region
> > + * specified in memmap with IOPMP translator. Make sure subregions are added to
> > + * system memory before call this function. It also add entry to
> > + * iopmp_protection_memmaps for recording the relationship between physical
> > + * address regions and IOPMP.
> > + */
> > +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> > + uint32_t map_entry_num)
> > +{
> > + IopmpState *s = IOPMP(dev);
> > + uint32_t i;
> > + MemoryRegion *iommu_alias;
> > + MemoryRegion *target_mr = get_system_memory();
> > + MemoryRegion *downstream = g_malloc0(sizeof(MemoryRegion));
> > + memory_region_init(downstream, NULL, "iopmp_downstream",
> > + memory_region_size(target_mr));
> > + /* Copy subregions of target to downstream */
> > + copy_memory_subregions(target_mr, downstream);
> > +
> > + iopmp_protection_memmap *map;
> > + for (i = 0; i < map_entry_num; i++) {
> > + /* Memory access to protected regions of target are through IOPMP */
> > + iommu_alias = g_new(MemoryRegion, 1);
> > + memory_region_init_alias(iommu_alias, NULL, "iommu_alias",
> > + MEMORY_REGION(&s->iommu), memmap[i].base,
> > + memmap[i].size);
> > + memory_region_add_subregion_overlap(target_mr, memmap[i].base,
> > + iommu_alias, 1);
> > + /* Record which IOPMP is responsible for the region */
> > + map = g_new0(iopmp_protection_memmap, 1);
> > + map->iopmp_s = s;
> > + map->entry.base = memmap[i].base;
> > + map->entry.size = memmap[i].size;
> > + QLIST_INSERT_HEAD(&iopmp_protection_memmaps, map, list);
> > + }
> > + s->downstream = downstream;
> > + address_space_init(&s->downstream_as, s->downstream,
> > + "iopmp-downstream-as");
> > +}
> > +
> > +
> > type_init(iopmp_register_types);
> > diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> > index b8fe479108..ebe9c4bc4a 100644
> > --- a/include/hw/misc/riscv_iopmp.h
> > +++ b/include/hw/misc/riscv_iopmp.h
> > @@ -165,4 +165,7 @@ typedef struct IopmpState {
> > uint32_t fabricated_v;
> > } IopmpState;
> >
> > +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap,
> > + uint32_t mapentry_num);
> > +
> > #endif
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support
2024-08-08 4:01 ` Alistair Francis
@ 2024-08-09 10:14 ` Ethan Chen via
2024-08-12 0:48 ` Alistair Francis
0 siblings, 1 reply; 27+ messages in thread
From: Ethan Chen via @ 2024-08-09 10:14 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Thu, Aug 08, 2024 at 02:01:13PM +1000, Alistair Francis wrote:
>
> On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> >
> > - Add 'iopmp=on' option to enable IOPMP. It adds an iopmp device virt machine
> > to protect all regions of system memory, and configures RRID of CPU.
> >
> > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > ---
> > docs/system/riscv/virt.rst | 5 +++
> > hw/riscv/Kconfig | 1 +
> > hw/riscv/virt.c | 63 ++++++++++++++++++++++++++++++++++++++
> > include/hw/riscv/virt.h | 5 ++-
> > 4 files changed, 73 insertions(+), 1 deletion(-)
> >
> > diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
> > index 9a06f95a34..9fd006ccc2 100644
> > --- a/docs/system/riscv/virt.rst
> > +++ b/docs/system/riscv/virt.rst
> > @@ -116,6 +116,11 @@ The following machine-specific options are supported:
> > having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
> > the default number of per-HART VS-level AIA IMSIC pages is 0.
> >
> > +- iopmp=[on|off]
> > +
> > + When this option is "on", an IOPMP device is added to machine. IOPMP checks
> > + memory transcations in system memory. This option is assumed to be "off".
>
> We probably should have a a little more here. You don't even mention
> that this is the rapid-k model.
I'll provide more details.
>
> It might be worth adding a `model` field, to make it easier to add
> other models in the future. Thoughts?
>
I think the IOPMP model should be a device property and not
configured here.
Thanks,
Ethan Chen
> Alistair
>
> > +
> > Running Linux kernel
> > --------------------
> >
> > diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
> > index a2030e3a6f..0b45a5ade2 100644
> > --- a/hw/riscv/Kconfig
> > +++ b/hw/riscv/Kconfig
> > @@ -56,6 +56,7 @@ config RISCV_VIRT
> > select PLATFORM_BUS
> > select ACPI
> > select ACPI_PCI
> > + select RISCV_IOPMP
> >
> > config SHAKTI_C
> > bool
> > diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> > index bc0893e087..5a03c03c4a 100644
> > --- a/hw/riscv/virt.c
> > +++ b/hw/riscv/virt.c
> > @@ -55,6 +55,7 @@
> > #include "hw/acpi/aml-build.h"
> > #include "qapi/qapi-visit-common.h"
> > #include "hw/virtio/virtio-iommu.h"
> > +#include "hw/misc/riscv_iopmp.h"
> >
> > /* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
> > static bool virt_use_kvm_aia(RISCVVirtState *s)
> > @@ -82,6 +83,7 @@ static const MemMapEntry virt_memmap[] = {
> > [VIRT_UART0] = { 0x10000000, 0x100 },
> > [VIRT_VIRTIO] = { 0x10001000, 0x1000 },
> > [VIRT_FW_CFG] = { 0x10100000, 0x18 },
> > + [VIRT_IOPMP] = { 0x10200000, 0x100000 },
> > [VIRT_FLASH] = { 0x20000000, 0x4000000 },
> > [VIRT_IMSIC_M] = { 0x24000000, VIRT_IMSIC_MAX_SIZE },
> > [VIRT_IMSIC_S] = { 0x28000000, VIRT_IMSIC_MAX_SIZE },
> > @@ -90,6 +92,11 @@ static const MemMapEntry virt_memmap[] = {
> > [VIRT_DRAM] = { 0x80000000, 0x0 },
> > };
> >
> > +static const MemMapEntry iopmp_protect_memmap[] = {
> > + /* IOPMP protect all regions by default */
> > + {0, 0xFFFFFFFF},
> > +};
> > +
> > /* PCIe high mmio is fixed for RV32 */
> > #define VIRT32_HIGH_PCIE_MMIO_BASE 0x300000000ULL
> > #define VIRT32_HIGH_PCIE_MMIO_SIZE (4 * GiB)
> > @@ -1024,6 +1031,24 @@ static void create_fdt_virtio_iommu(RISCVVirtState *s, uint16_t bdf)
> > bdf + 1, iommu_phandle, bdf + 1, 0xffff - bdf);
> > }
> >
> > +static void create_fdt_iopmp(RISCVVirtState *s, const MemMapEntry *memmap,
> > + uint32_t irq_mmio_phandle) {
> > + g_autofree char *name = NULL;
> > + MachineState *ms = MACHINE(s);
> > +
> > + name = g_strdup_printf("/soc/iopmp@%lx", (long)memmap[VIRT_IOPMP].base);
> > + qemu_fdt_add_subnode(ms->fdt, name);
> > + qemu_fdt_setprop_string(ms->fdt, name, "compatible", "riscv_iopmp");
> > + qemu_fdt_setprop_cells(ms->fdt, name, "reg", 0x0, memmap[VIRT_IOPMP].base,
> > + 0x0, memmap[VIRT_IOPMP].size);
> > + qemu_fdt_setprop_cell(ms->fdt, name, "interrupt-parent", irq_mmio_phandle);
> > + if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> > + qemu_fdt_setprop_cell(ms->fdt, name, "interrupts", IOPMP_IRQ);
> > + } else {
> > + qemu_fdt_setprop_cells(ms->fdt, name, "interrupts", IOPMP_IRQ, 0x4);
> > + }
> > +}
> > +
> > static void finalize_fdt(RISCVVirtState *s)
> > {
> > uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1;
> > @@ -1042,6 +1067,10 @@ static void finalize_fdt(RISCVVirtState *s)
> > create_fdt_uart(s, virt_memmap, irq_mmio_phandle);
> >
> > create_fdt_rtc(s, virt_memmap, irq_mmio_phandle);
> > +
> > + if (s->have_iopmp) {
> > + create_fdt_iopmp(s, virt_memmap, irq_mmio_phandle);
> > + }
> > }
> >
> > static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap)
> > @@ -1425,6 +1454,7 @@ static void virt_machine_init(MachineState *machine)
> > DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
> > int i, base_hartid, hart_count;
> > int socket_count = riscv_socket_count(machine);
> > + int cpu, socket;
> >
> > /* Check socket count limit */
> > if (VIRT_SOCKETS_MAX < socket_count) {
> > @@ -1606,6 +1636,19 @@ static void virt_machine_init(MachineState *machine)
> > }
> > virt_flash_map(s, system_memory);
> >
> > + if (s->have_iopmp) {
> > + DeviceState *iopmp_dev = sysbus_create_simple(TYPE_IOPMP,
> > + memmap[VIRT_IOPMP].base,
> > + qdev_get_gpio_in(DEVICE(mmio_irqchip), IOPMP_IRQ));
> > +
> > + for (socket = 0; socket < socket_count; socket++) {
> > + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> > + iopmp_setup_cpu(&s->soc[socket].harts[cpu], 0);
> > + }
> > + }
> > + iopmp_setup_system_memory(iopmp_dev, iopmp_protect_memmap, 1);
> > + }
> > +
> > /* load/create device tree */
> > if (machine->dtb) {
> > machine->fdt = load_device_tree(machine->dtb, &s->fdt_size);
> > @@ -1702,6 +1745,20 @@ static void virt_set_aclint(Object *obj, bool value, Error **errp)
> > s->have_aclint = value;
> > }
> >
> > +static bool virt_get_iopmp(Object *obj, Error **errp)
> > +{
> > + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> > +
> > + return s->have_iopmp;
> > +}
> > +
> > +static void virt_set_iopmp(Object *obj, bool value, Error **errp)
> > +{
> > + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> > +
> > + s->have_iopmp = value;
> > +}
> > +
> > bool virt_is_acpi_enabled(RISCVVirtState *s)
> > {
> > return s->acpi != ON_OFF_AUTO_OFF;
> > @@ -1814,6 +1871,12 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
> > NULL, NULL);
> > object_class_property_set_description(oc, "acpi",
> > "Enable ACPI");
> > +
> > + object_class_property_add_bool(oc, "iopmp", virt_get_iopmp,
> > + virt_set_iopmp);
> > + object_class_property_set_description(oc, "iopmp",
> > + "Set on/off to enable/disable "
> > + "iopmp device");
> > }
> >
> > static const TypeInfo virt_machine_typeinfo = {
> > diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
> > index c0dc41ff9a..009b4ebea7 100644
> > --- a/include/hw/riscv/virt.h
> > +++ b/include/hw/riscv/virt.h
> > @@ -55,6 +55,7 @@ struct RISCVVirtState {
> >
> > int fdt_size;
> > bool have_aclint;
> > + bool have_iopmp;
> > RISCVVirtAIAType aia_type;
> > int aia_guests;
> > char *oem_id;
> > @@ -84,12 +85,14 @@ enum {
> > VIRT_PCIE_MMIO,
> > VIRT_PCIE_PIO,
> > VIRT_PLATFORM_BUS,
> > - VIRT_PCIE_ECAM
> > + VIRT_PCIE_ECAM,
> > + VIRT_IOPMP,
> > };
> >
> > enum {
> > UART0_IRQ = 10,
> > RTC_IRQ = 11,
> > + IOPMP_IRQ = 12,
> > VIRTIO_IRQ = 1, /* 1 to 8 */
> > VIRTIO_COUNT = 8,
> > PCIE_IRQ = 0x20, /* 32 to 35 */
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device
2024-08-09 9:42 ` Ethan Chen via
@ 2024-08-12 0:42 ` Alistair Francis
0 siblings, 0 replies; 27+ messages in thread
From: Alistair Francis @ 2024-08-12 0:42 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Fri, Aug 9, 2024 at 7:42 PM Ethan Chen <ethan84@andestech.com> wrote:
>
> On Thu, Aug 08, 2024 at 01:56:35PM +1000, Alistair Francis wrote:
> > [EXTERNAL MAIL]
> >
> > On Mon, Jul 15, 2024 at 7:58 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > >
> > > Support basic functions of IOPMP specification v0.9.1 rapid-k model.
> > > The specification url:
> > > https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
> > >
> > > The IOPMP checks whether memory access from a device or CPU is valid.
> > > This implementation uses an IOMMU to modify the address space accessed
> > > by the device.
> > >
> > > For device access with IOMMUAccessFlags specifying read or write
> > > (IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in
> > > iopmp_translate. If the access is valid, the target address space is
> > > downstream_as. If the access is blocked, it will be redirected to
> > > blocked_rwx_as.
> > >
> > > For CPU access with IOMMUAccessFlags not specifying read or write
> > > (IOMMU_NONE), the IOPMP translates the access to the corresponding
> > > address space based on the permission. If the access has full permission
> > > (rwx), the target address space is downstream_as. If the access has
> > > limited permission, the target address space is blocked_ followed by
> > > the lacked permissions.
> > >
> > > The operation of a blocked region can trigger an IOPMP interrupt, a bus
> > > error, or it can respond with success and fabricated data, depending on
> > > the value of the IOPMP ERR_CFG register.
> > >
> > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > ---
> > > hw/misc/Kconfig | 3 +
> > > hw/misc/meson.build | 1 +
> > > hw/misc/riscv_iopmp.c | 1154 +++++++++++++++++++++++++++++++++
> > > hw/misc/trace-events | 3 +
> > > include/hw/misc/riscv_iopmp.h | 168 +++++
> > > 5 files changed, 1329 insertions(+)
> > > create mode 100644 hw/misc/riscv_iopmp.c
> > > create mode 100644 include/hw/misc/riscv_iopmp.h
> > >
> > > diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
> > > index 1e08785b83..427f0c702e 100644
> > > --- a/hw/misc/Kconfig
> > > +++ b/hw/misc/Kconfig
> > > @@ -213,4 +213,7 @@ config IOSB
> > > config XLNX_VERSAL_TRNG
> > > bool
> > >
> > > +config RISCV_IOPMP
> > > + bool
> > > +
> > > source macio/Kconfig
> > > diff --git a/hw/misc/meson.build b/hw/misc/meson.build
> > > index 2ca8717be2..d9006e1d81 100644
> > > --- a/hw/misc/meson.build
> > > +++ b/hw/misc/meson.build
> > > @@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c'))
> > > system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c'))
> > > system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c'))
> > > system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
> > > +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c'))
> > >
> > > subdir('macio')
> > >
> > > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > > new file mode 100644
> > > index 0000000000..db43e3c73f
> > > --- /dev/null
> > > +++ b/hw/misc/riscv_iopmp.c
> > > @@ -0,0 +1,1154 @@
> > > +/*
> > > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > > + *
> > > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > > + *
> > > + * SPDX-License-Identifier: GPL-2.0-or-later
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify it
> > > + * under the terms and conditions of the GNU General Public License,
> > > + * version 2 or later, as published by the Free Software Foundation.
> > > + *
> > > + * This program is distributed in the hope it will be useful, but WITHOUT
> > > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > > + * more details.
> > > + *
> > > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qemu/log.h"
> > > +#include "qapi/error.h"
> > > +#include "trace.h"
> > > +#include "exec/exec-all.h"
> > > +#include "exec/address-spaces.h"
> > > +#include "hw/qdev-properties.h"
> > > +#include "hw/sysbus.h"
> > > +#include "hw/misc/riscv_iopmp.h"
> > > +#include "memory.h"
> > > +#include "hw/irq.h"
> > > +#include "hw/registerfields.h"
> > > +#include "trace.h"
> > > +
> > > +#define TYPE_IOPMP_IOMMU_MEMORY_REGION "iopmp-iommu-memory-region"
> > > +
> > > +REG32(VERSION, 0x00)
> > > + FIELD(VERSION, VENDOR, 0, 24)
> > > + FIELD(VERSION, SPECVER , 24, 8)
> > > +REG32(IMP, 0x04)
> > > + FIELD(IMP, IMPID, 0, 32)
> > > +REG32(HWCFG0, 0x08)
> > > + FIELD(HWCFG0, MODEL, 0, 4)
> > > + FIELD(HWCFG0, TOR_EN, 4, 1)
> > > + FIELD(HWCFG0, SPS_EN, 5, 1)
> > > + FIELD(HWCFG0, USER_CFG_EN, 6, 1)
> > > + FIELD(HWCFG0, PRIENT_PROG, 7, 1)
> > > + FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1)
> > > + FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1)
> > > + FIELD(HWCFG0, CHK_X, 10, 1)
> > > + FIELD(HWCFG0, NO_X, 11, 1)
> > > + FIELD(HWCFG0, NO_W, 12, 1)
> > > + FIELD(HWCFG0, STALL_EN, 13, 1)
> > > + FIELD(HWCFG0, PEIS, 14, 1)
> > > + FIELD(HWCFG0, PEES, 15, 1)
> > > + FIELD(HWCFG0, MFR_EN, 16, 1)
> > > + FIELD(HWCFG0, MD_NUM, 24, 7)
> > > + FIELD(HWCFG0, ENABLE, 31, 1)
> > > +REG32(HWCFG1, 0x0C)
> > > + FIELD(HWCFG1, RRID_NUM, 0, 16)
> > > + FIELD(HWCFG1, ENTRY_NUM, 16, 16)
> > > +REG32(HWCFG2, 0x10)
> > > + FIELD(HWCFG2, PRIO_ENTRY, 0, 16)
> > > + FIELD(HWCFG2, RRID_TRANSL, 16, 16)
> > > +REG32(ENTRYOFFSET, 0x14)
> > > + FIELD(ENTRYOFFSET, OFFSET, 0, 32)
> > > +REG32(MDSTALL, 0x30)
> > > + FIELD(MDSTALL, EXEMPT, 0, 1)
> > > + FIELD(MDSTALL, MD, 1, 31)
> > > +REG32(MDSTALLH, 0x34)
> > > + FIELD(MDSTALLH, MD, 0, 32)
> > > +REG32(RRIDSCP, 0x38)
> > > + FIELD(RRIDSCP, RRID, 0, 16)
> > > + FIELD(RRIDSCP, OP, 30, 2)
> > > +REG32(MDLCK, 0x40)
> > > + FIELD(MDLCK, L, 0, 1)
> > > + FIELD(MDLCK, MD, 1, 31)
> > > +REG32(MDLCKH, 0x44)
> > > + FIELD(MDLCKH, MDH, 0, 32)
> > > +REG32(MDCFGLCK, 0x48)
> > > + FIELD(MDCFGLCK, L, 0, 1)
> > > + FIELD(MDCFGLCK, F, 1, 7)
> > > +REG32(ENTRYLCK, 0x4C)
> > > + FIELD(ENTRYLCK, L, 0, 1)
> > > + FIELD(ENTRYLCK, F, 1, 16)
> > > +REG32(ERR_CFG, 0x60)
> > > + FIELD(ERR_CFG, L, 0, 1)
> > > + FIELD(ERR_CFG, IE, 1, 1)
> > > + FIELD(ERR_CFG, IRE, 2, 1)
> > > + FIELD(ERR_CFG, IWE, 3, 1)
> > > + FIELD(ERR_CFG, IXE, 4, 1)
> > > + FIELD(ERR_CFG, RRE, 5, 1)
> > > + FIELD(ERR_CFG, RWE, 6, 1)
> > > + FIELD(ERR_CFG, RXE, 7, 1)
> > > +REG32(ERR_REQINFO, 0x64)
> > > + FIELD(ERR_REQINFO, V, 0, 1)
> > > + FIELD(ERR_REQINFO, TTYPE, 1, 2)
> > > + FIELD(ERR_REQINFO, ETYPE, 4, 3)
> > > + FIELD(ERR_REQINFO, SVC, 7, 1)
> > > +REG32(ERR_REQADDR, 0x68)
> > > + FIELD(ERR_REQADDR, ADDR, 0, 32)
> > > +REG32(ERR_REQADDRH, 0x6C)
> > > + FIELD(ERR_REQADDRH, ADDRH, 0, 32)
> > > +REG32(ERR_REQID, 0x70)
> > > + FIELD(ERR_REQID, RRID, 0, 16)
> > > + FIELD(ERR_REQID, EID, 16, 16)
> > > +REG32(ERR_MFR, 0x74)
> > > + FIELD(ERR_MFR, SVW, 0, 16)
> > > + FIELD(ERR_MFR, SVI, 16, 12)
> > > + FIELD(ERR_MFR, SVS, 31, 1)
> > > +REG32(MDCFG0, 0x800)
> > > + FIELD(MDCFG0, T, 0, 16)
> > > +REG32(SRCMD_EN0, 0x1000)
> > > + FIELD(SRCMD_EN0, L, 0, 1)
> > > + FIELD(SRCMD_EN0, MD, 1, 31)
> > > +REG32(SRCMD_ENH0, 0x1004)
> > > + FIELD(SRCMD_ENH0, MDH, 0, 32)
> > > +REG32(SRCMD_R0, 0x1008)
> > > + FIELD(SRCMD_R0, MD, 1, 31)
> > > +REG32(SRCMD_RH0, 0x100C)
> > > + FIELD(SRCMD_RH0, MDH, 0, 32)
> > > +REG32(SRCMD_W0, 0x1010)
> > > + FIELD(SRCMD_W0, MD, 1, 31)
> > > +REG32(SRCMD_WH0, 0x1014)
> > > + FIELD(SRCMD_WH0, MDH, 0, 32)
> > > +
> > > +FIELD(ENTRY_ADDR, ADDR, 0, 32)
> > > +FIELD(ENTRY_ADDRH, ADDRH, 0, 32)
> > > +
> > > +FIELD(ENTRY_CFG, R, 0, 1)
> > > +FIELD(ENTRY_CFG, W, 1, 1)
> > > +FIELD(ENTRY_CFG, X, 2, 1)
> > > +FIELD(ENTRY_CFG, A, 3, 2)
> > > +FIELD(ENTRY_CFG, SIRE, 5, 1)
> > > +FIELD(ENTRY_CFG, SIWE, 6, 1)
> > > +FIELD(ENTRY_CFG, SIXE, 7, 1)
> > > +FIELD(ENTRY_CFG, SERE, 8, 1)
> > > +FIELD(ENTRY_CFG, SEWE, 9, 1)
> > > +FIELD(ENTRY_CFG, SEXE, 10, 1)
> > > +
> > > +FIELD(ENTRY_USER_CFG, IM, 0, 32)
> > > +
> > > +/* Offsets to SRCMD_EN(i) */
> > > +#define SRCMD_EN_OFFSET 0x0
> > > +#define SRCMD_ENH_OFFSET 0x4
> > > +#define SRCMD_R_OFFSET 0x8
> > > +#define SRCMD_RH_OFFSET 0xC
> > > +#define SRCMD_W_OFFSET 0x10
> > > +#define SRCMD_WH_OFFSET 0x14
> > > +
> > > +/* Offsets to ENTRY_ADDR(i) */
> > > +#define ENTRY_ADDR_OFFSET 0x0
> > > +#define ENTRY_ADDRH_OFFSET 0x4
> > > +#define ENTRY_CFG_OFFSET 0x8
> > > +#define ENTRY_USER_CFG_OFFSET 0xC
> > > +
> > > +/* Memmap for parallel IOPMPs */
> > > +typedef struct iopmp_protection_memmap {
> > > + MemMapEntry entry;
> > > + IopmpState *iopmp_s;
> > > + QLIST_ENTRY(iopmp_protection_memmap) list;
> > > +} iopmp_protection_memmap;
> > > +QLIST_HEAD(, iopmp_protection_memmap)
> > > + iopmp_protection_memmaps = QLIST_HEAD_INITIALIZER(iopmp_protection_memmaps);
> > > +
> > > +static void iopmp_iommu_notify(IopmpState *s)
> > > +{
> > > + IOMMUTLBEvent event = {
> > > + .entry = {
> > > + .iova = 0,
> > > + .translated_addr = 0,
> > > + .addr_mask = -1ULL,
> > > + .perm = IOMMU_NONE,
> > > + },
> > > + .type = IOMMU_NOTIFIER_UNMAP,
> > > + };
> > > +
> > > + for (int i = 0; i < s->rrid_num; i++) {
> > > + memory_region_notify_iommu(&s->iommu, i, event);
> > > + }
> > > +}
> > > +
> > > +static void iopmp_decode_napot(uint64_t a, uint64_t *sa,
> > > + uint64_t *ea)
> > > +{
> > > + /*
> > > + * aaaa...aaa0 8-byte NAPOT range
> > > + * aaaa...aa01 16-byte NAPOT range
> > > + * aaaa...a011 32-byte NAPOT range
> > > + * ...
> > > + * aa01...1111 2^XLEN-byte NAPOT range
> > > + * a011...1111 2^(XLEN+1)-byte NAPOT range
> > > + * 0111...1111 2^(XLEN+2)-byte NAPOT range
> > > + * 1111...1111 Reserved
> > > + */
> > > +
> > > + a = (a << 2) | 0x3;
> > > + *sa = a & (a + 1);
> > > + *ea = a | (a + 1);
> > > +}
> > > +
> > > +static void iopmp_update_rule(IopmpState *s, uint32_t entry_index)
> > > +{
> > > + uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg;
> > > + uint64_t this_addr = s->regs.entry[entry_index].addr_reg |
> > > + ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32);
> > > + uint64_t prev_addr = 0u;
> > > + uint64_t sa = 0u;
> > > + uint64_t ea = 0u;
> > > +
> > > + if (entry_index >= 1u) {
> > > + prev_addr = s->regs.entry[entry_index - 1].addr_reg |
> > > + ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32);
> > > + }
> > > +
> > > + switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) {
> > > + case IOPMP_AMATCH_OFF:
> > > + sa = 0u;
> > > + ea = -1;
> > > + break;
> > > +
> > > + case IOPMP_AMATCH_TOR:
> > > + sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */
> > > + ea = ((this_addr) << 2) - 1u;
> > > + if (sa > ea) {
> > > + sa = ea = 0u;
> > > + }
> > > + break;
> > > +
> > > + case IOPMP_AMATCH_NA4:
> > > + sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
> > > + ea = (sa + 4u) - 1u;
> > > + break;
> > > +
> > > + case IOPMP_AMATCH_NAPOT:
> > > + iopmp_decode_napot(this_addr, &sa, &ea);
> > > + break;
> > > +
> > > + default:
> > > + sa = 0u;
> > > + ea = 0u;
> > > + break;
> > > + }
> > > +
> > > + s->entry_addr[entry_index].sa = sa;
> > > + s->entry_addr[entry_index].ea = ea;
> > > + iopmp_iommu_notify(s);
> > > +}
> > > +
> > > +static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size)
> > > +{
> > > + IopmpState *s = IOPMP(opaque);
> > > + uint32_t rz = 0;
> > > + uint32_t offset, idx;
> > > +
> > > + switch (addr) {
> > > + case A_VERSION:
> > > + rz = VENDER_VIRT << R_VERSION_VENDOR_SHIFT |
> > > + SPECVER_0_9_1 << R_VERSION_SPECVER_SHIFT;
> >
> > It would be better to use the FIELD_DP32() macro instead of the manual shifts
>
> It will be refined in next revision.
>
> >
> > > + break;
> > > + case A_IMP:
> > > + rz = IMPID_0_9_1;
> > > + break;
> > > + case A_HWCFG0:
> > > + rz = s->model << R_HWCFG0_MODEL_SHIFT |
> > > + 1 << R_HWCFG0_TOR_EN_SHIFT |
> > > + 0 << R_HWCFG0_SPS_EN_SHIFT |
> > > + 0 << R_HWCFG0_USER_CFG_EN_SHIFT |
> > > + s->prient_prog << R_HWCFG0_PRIENT_PROG_SHIFT |
> > > + 0 << R_HWCFG0_RRID_TRANSL_EN_SHIFT |
> > > + 0 << R_HWCFG0_RRID_TRANSL_PROG_SHIFT |
> > > + 1 << R_HWCFG0_CHK_X_SHIFT |
> > > + 0 << R_HWCFG0_NO_X_SHIFT |
> > > + 0 << R_HWCFG0_NO_W_SHIFT |
> > > + 0 << R_HWCFG0_STALL_EN_SHIFT |
> > > + 0 << R_HWCFG0_PEIS_SHIFT |
> > > + 0 << R_HWCFG0_PEES_SHIFT |
> > > + 0 << R_HWCFG0_MFR_EN_SHIFT |
> > > + s->md_num << R_HWCFG0_MD_NUM_SHIFT |
> > > + s->enable << R_HWCFG0_ENABLE_SHIFT ;
> > > + break;
> > > + case A_HWCFG1:
> > > + rz = s->rrid_num << R_HWCFG1_RRID_NUM_SHIFT |
> > > + s->entry_num << R_HWCFG1_ENTRY_NUM_SHIFT;
> > > + break;
> > > + case A_HWCFG2:
> > > + rz = s->prio_entry << R_HWCFG2_PRIO_ENTRY_SHIFT;
> > > + break;
> > > + case A_ENTRYOFFSET:
> > > + rz = s->entry_offset;
> > > + break;
> > > + case A_ERR_CFG:
> > > + rz = s->regs.err_cfg;
> > > + break;
> > > + case A_MDLCK:
> > > + rz = s->regs.mdlck;
> > > + break;
> > > + case A_MDLCKH:
> > > + rz = s->regs.mdlckh;
> > > + break;
> > > + case A_MDCFGLCK:
> > > + rz = s->regs.mdcfglck;
> > > + break;
> > > + case A_ENTRYLCK:
> > > + rz = s->regs.entrylck;
> > > + break;
> > > + case A_ERR_REQADDR:
> > > + rz = s->regs.err_reqaddr & UINT32_MAX;
> > > + break;
> > > + case A_ERR_REQADDRH:
> > > + rz = s->regs.err_reqaddr >> 32;
> > > + break;
> > > + case A_ERR_REQID:
> > > + rz = s->regs.err_reqid;
> > > + break;
> > > + case A_ERR_REQINFO:
> > > + rz = s->regs.err_reqinfo;
> > > + break;
> > > +
> > > + default:
> > > + if (addr >= A_MDCFG0 &&
> > > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > > + offset = addr - A_MDCFG0;
> > > + idx = offset >> 2;
> > > + if (idx == 0 && offset == 0) {
> > > + rz = s->regs.mdcfg[idx];
> > > + } else {
> > > + /* Only MDCFG0 is implemented in rapid-k model */
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + }
> > > + } else if (addr >= A_SRCMD_EN0 &&
> > > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > > + offset = addr - A_SRCMD_EN0;
> > > + idx = offset >> 5;
> > > + offset &= 0x1f;
> > > +
> > > + switch (offset) {
> > > + case SRCMD_EN_OFFSET:
> > > + rz = s->regs.srcmd_en[idx];
> > > + break;
> > > + case SRCMD_ENH_OFFSET:
> > > + rz = s->regs.srcmd_enh[idx];
> > > + break;
> > > + default:
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + break;
> > > + }
> > > + } else if (addr >= s->entry_offset &&
> > > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET +
> > > + 16 * (s->entry_num - 1)) {
> > > + offset = addr - s->entry_offset;
> > > + idx = offset >> 4;
> > > + offset &= 0xf;
> > > +
> > > + switch (offset) {
> > > + case ENTRY_ADDR_OFFSET:
> > > + rz = s->regs.entry[idx].addr_reg;
> > > + break;
> > > + case ENTRY_ADDRH_OFFSET:
> > > + rz = s->regs.entry[idx].addrh_reg;
> > > + break;
> > > + case ENTRY_CFG_OFFSET:
> > > + rz = s->regs.entry[idx].cfg_reg;
> > > + break;
> > > + case ENTRY_USER_CFG_OFFSET:
> > > + /* Does not support user customized permission */
> > > + rz = 0;
> > > + break;
> > > + default:
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + break;
> > > + }
> > > + } else {
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + }
> > > + break;
> > > + }
> > > + trace_iopmp_read(addr, rz);
> > > + return rz;
> > > +}
> > > +
> > > +static void
> > > +iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size)
> > > +{
> > > + IopmpState *s = IOPMP(opaque);
> > > + uint32_t offset, idx;
> > > + uint32_t value32 = value;
> > > +
> > > + trace_iopmp_write(addr, value32);
> > > +
> > > + switch (addr) {
> > > + case A_VERSION: /* RO */
> > > + break;
> > > + case A_IMP: /* RO */
> > > + break;
> > > + case A_HWCFG0:
> > > + if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) {
> > > + /* W1C */
> > > + s->prient_prog = 0;
> > > + }
> > > + if (FIELD_EX32(value32, HWCFG0, ENABLE)) {
> > > + /* W1S */
> > > + s->enable = 1;
> > > + iopmp_iommu_notify(s);
> > > + }
> > > + break;
> > > + case A_HWCFG1: /* RO */
> > > + break;
> > > + case A_HWCFG2:
> > > + if (s->prient_prog) {
> > > + s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY);
> > > + }
> > > + break;
> > > + case A_ERR_CFG:
> > > + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) {
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L,
> > > + FIELD_EX32(value32, ERR_CFG, L));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE,
> > > + FIELD_EX32(value32, ERR_CFG, IE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IRE,
> > > + FIELD_EX32(value32, ERR_CFG, IRE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RRE,
> > > + FIELD_EX32(value32, ERR_CFG, RRE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IWE,
> > > + FIELD_EX32(value32, ERR_CFG, IWE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RWE,
> > > + FIELD_EX32(value32, ERR_CFG, RWE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IXE,
> > > + FIELD_EX32(value32, ERR_CFG, IXE));
> > > + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RXE,
> > > + FIELD_EX32(value32, ERR_CFG, RXE));
> > > + }
> > > + break;
> > > + case A_MDLCK:
> > > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > > + s->regs.mdlck = value32;
> > > + }
> > > + break;
> > > + case A_MDLCKH:
> > > + if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) {
> > > + s->regs.mdlckh = value32;
> > > + }
> > > + break;
> > > + case A_MDCFGLCK:
> > > + if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) {
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F,
> > > + FIELD_EX32(value32, MDCFGLCK, F));
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L,
> > > + FIELD_EX32(value32, MDCFGLCK, L));
> > > + }
> > > + break;
> > > + case A_ENTRYLCK:
> > > + if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) {
> > > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F,
> > > + FIELD_EX32(value32, ENTRYLCK, F));
> > > + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L,
> > > + FIELD_EX32(value32, ENTRYLCK, L));
> > > + }
> > > + case A_ERR_REQADDR: /* RO */
> > > + break;
> > > + case A_ERR_REQADDRH: /* RO */
> > > + break;
> > > + case A_ERR_REQID: /* RO */
> > > + break;
> > > + case A_ERR_REQINFO:
> > > + if (FIELD_EX32(value32, ERR_REQINFO, V)) {
> > > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo,
> > > + ERR_REQINFO, V, 0);
> > > + qemu_set_irq(s->irq, 0);
> > > + }
> > > + break;
> > > +
> > > + default:
> > > + if (addr >= A_MDCFG0 &&
> > > + addr < A_MDCFG0 + 4 * (s->md_num - 1)) {
> > > + offset = addr - A_MDCFG0;
> > > + idx = offset >> 2;
> > > + /* RO in rapid-k model */
> > > + if (idx > 0) {
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + }
> > > + } else if (addr >= A_SRCMD_EN0 &&
> > > + addr < A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) {
> > > + offset = addr - A_SRCMD_EN0;
> > > + idx = offset >> 5;
> > > + offset &= 0x1f;
> > > +
> > > + if (offset % 4) {
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L)
> > > + == 0) {
> > > + switch (offset) {
> > > + case SRCMD_EN_OFFSET:
> > > + s->regs.srcmd_en[idx] =
> > > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L,
> > > + FIELD_EX32(value32, SRCMD_EN0, L));
> > > +
> > > + /* MD field is protected by mdlck */
> > > + value32 = (value32 & ~s->regs.mdlck) |
> > > + (s->regs.srcmd_en[idx] & s->regs.mdlck);
> > > + s->regs.srcmd_en[idx] =
> > > + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD,
> > > + FIELD_EX32(value32, SRCMD_EN0, MD));
> > > + break;
> > > + case SRCMD_ENH_OFFSET:
> > > + value32 = (value32 & ~s->regs.mdlckh) |
> > > + (s->regs.srcmd_enh[idx] & s->regs.mdlckh);
> > > + s->regs.srcmd_enh[idx] =
> > > + FIELD_DP32(s->regs.srcmd_enh[idx], SRCMD_ENH0, MDH,
> > > + value32);
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + }
> > > + } else if (addr >= s->entry_offset &&
> > > + addr < s->entry_offset + ENTRY_USER_CFG_OFFSET
> > > + + 16 * (s->entry_num - 1)) {
> > > + offset = addr - s->entry_offset;
> > > + idx = offset >> 4;
> > > + offset &= 0xf;
> > > +
> > > + /* index < ENTRYLCK_F is protected */
> > > + if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) {
> > > + switch (offset) {
> > > + case ENTRY_ADDR_OFFSET:
> > > + s->regs.entry[idx].addr_reg = value32;
> > > + break;
> > > + case ENTRY_ADDRH_OFFSET:
> > > + s->regs.entry[idx].addrh_reg = value32;
> > > + break;
> > > + case ENTRY_CFG_OFFSET:
> > > + s->regs.entry[idx].cfg_reg = value32;
> > > + break;
> > > + case ENTRY_USER_CFG_OFFSET:
> > > + /* Does not support user customized permission */
> > > + break;
> > > + default:
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n",
> > > + __func__, (int)addr);
> > > + break;
> > > + }
> > > + iopmp_update_rule(s, idx);
> > > + if (idx + 1 < s->entry_num &&
> > > + FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) ==
> > > + IOPMP_AMATCH_TOR) {
> > > + iopmp_update_rule(s, idx + 1);
> > > + }
> > > + }
> > > + } else {
> > > + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__,
> > > + (int)addr);
> > > + }
> > > + }
> > > +}
> > > +
> > > +/* Match entry in memory domain */
> > > +static int match_entry_md(IopmpState *s, int md_idx, hwaddr start_addr,
> > > + hwaddr end_addr, int *entry_idx,
> > > + int *prior_entry_in_tlb)
> > > +{
> > > + int entry_idx_s, entry_idx_e;
> > > + int result = ENTRY_NO_HIT;
> > > + int i = 0;
> > > + hwaddr tlb_sa = start_addr & ~(TARGET_PAGE_SIZE - 1);
> > > + hwaddr tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> > > +
> > > + entry_idx_s = md_idx * s->regs.mdcfg[0];
> > > + entry_idx_e = (md_idx + 1) * s->regs.mdcfg[0];
> > > +
> > > + if (entry_idx_s >= s->entry_num) {
> > > + return result;
> > > + }
> > > + if (entry_idx_e > s->entry_num) {
> > > + entry_idx_e = s->entry_num;
> > > + }
> > > + i = entry_idx_s;
> > > + for (i = entry_idx_s; i < entry_idx_e; i++) {
> > > + if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) ==
> > > + IOPMP_AMATCH_OFF) {
> > > + continue;
> > > + }
> > > + if (start_addr >= s->entry_addr[i].sa &&
> > > + start_addr <= s->entry_addr[i].ea) {
> > > + /* Check end address */
> > > + if (end_addr >= s->entry_addr[i].sa &&
> > > + end_addr <= s->entry_addr[i].ea) {
> > > + *entry_idx = i;
> > > + return ENTRY_HIT;
> > > + } else if (i >= s->prio_entry) {
> > > + /* Continue for non-prio_entry */
> > > + continue;
> > > + } else {
> > > + *entry_idx = i;
> > > + return ENTRY_PAR_HIT;
> > > + }
> > > + } else if (end_addr >= s->entry_addr[i].sa &&
> > > + end_addr <= s->entry_addr[i].ea) {
> > > + /* Only end address matches the entry */
> > > + if (i >= s->prio_entry) {
> > > + continue;
> > > + } else {
> > > + *entry_idx = i;
> > > + return ENTRY_PAR_HIT;
> > > + }
> > > + } else if (start_addr < s->entry_addr[i].sa &&
> > > + end_addr > s->entry_addr[i].ea) {
> > > + if (i >= s->prio_entry) {
> > > + continue;
> > > + } else {
> > > + *entry_idx = i;
> > > + return ENTRY_PAR_HIT;
> > > + }
> > > + }
> > > + if (prior_entry_in_tlb != NULL) {
> > > + if ((s->entry_addr[i].sa >= tlb_sa &&
> > > + s->entry_addr[i].sa <= tlb_ea) ||
> > > + (s->entry_addr[i].ea >= tlb_sa &&
> > > + s->entry_addr[i].ea <= tlb_ea)) {
> > > + /*
> > > + * TLB should not use the cached result when the tlb contains
> > > + * higher priority entry
> > > + */
> > > + *prior_entry_in_tlb = 1;
> > > + }
> > > + }
> > > + }
> > > + return result;
> > > +}
> > > +
> > > +static int match_entry(IopmpState *s, int rrid, hwaddr start_addr,
> > > + hwaddr end_addr, int *match_md_idx,
> > > + int *match_entry_idx, int *prior_entry_in_tlb)
> > > +{
> > > + int cur_result = ENTRY_NO_HIT;
> > > + int result = ENTRY_NO_HIT;
> > > + /* Remove lock bit */
> > > + uint64_t srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] |
> > > + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1;
> > > +
> > > + for (int md_idx = 0; md_idx < s->md_num; md_idx++) {
> > > + if (srcmd_en & (1ULL << md_idx)) {
> > > + cur_result = match_entry_md(s, md_idx, start_addr, end_addr,
> > > + match_entry_idx, prior_entry_in_tlb);
> > > + if (cur_result == ENTRY_HIT || cur_result == ENTRY_PAR_HIT) {
> > > + *match_md_idx = md_idx;
> > > + return cur_result;
> > > + }
> > > + }
> > > + }
> > > + return result;
> > > +}
> > > +
> > > +static void iopmp_error_reaction(IopmpState *s, uint32_t id, hwaddr start,
> > > + uint32_t info)
> > > +{
> > > + if (!FIELD_EX32(s->regs.err_reqinfo, ERR_REQINFO, V)) {
> > > + s->regs.err_reqinfo = info;
> > > + s->regs.err_reqinfo = FIELD_DP32(s->regs.err_reqinfo, ERR_REQINFO, V,
> > > + 1);
> > > + s->regs.err_reqid = id;
> > > + /* addr[LEN+2:2] */
> > > + s->regs.err_reqaddr = start >> 2;
> > > +
> > > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_READ &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IRE)) {
> > > + qemu_set_irq(s->irq, 1);
> > > + }
> > > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_WRITE &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IWE)) {
> > > + qemu_set_irq(s->irq, 1);
> > > + }
> > > + if (FIELD_EX32(info, ERR_REQINFO, TTYPE) == ERR_REQINFO_TTYPE_FETCH &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) &&
> > > + FIELD_EX32(s->regs.err_cfg, ERR_CFG, IXE)) {
> > > + qemu_set_irq(s->irq, 1);
> > > + }
> > > + }
> > > +}
> > > +
> > > +static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
> > > + IOMMUAccessFlags flags, int iommu_idx)
> > > +{
> > > + int rrid = iommu_idx;
> > > + IopmpState *s = IOPMP(container_of(iommu, IopmpState, iommu));
> > > + hwaddr start_addr, end_addr;
> > > + int entry_idx = -1;
> > > + int md_idx = -1;
> > > + int result;
> > > + uint32_t error_info = 0;
> > > + uint32_t error_id = 0;
> > > + int prior_entry_in_tlb = 0;
> > > + iopmp_permission iopmp_perm;
> > > + IOMMUTLBEntry entry = {
> > > + .target_as = &s->downstream_as,
> > > + .iova = addr,
> > > + .translated_addr = addr,
> > > + .addr_mask = 0,
> > > + .perm = IOMMU_NONE,
> > > + };
> > > +
> > > + if (!s->enable) {
> > > + /* Bypass IOPMP */
> > > + entry.addr_mask = -1ULL,
> > > + entry.perm = IOMMU_RW;
> > > + return entry;
> > > + }
> > > +
> > > + /* unknown RRID */
> > > + if (rrid >= s->rrid_num) {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_RRID);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > > + iopmp_error_reaction(s, error_id, addr, error_info);
> > > + entry.target_as = &s->blocked_rwx_as;
> > > + entry.perm = IOMMU_RW;
> > > + return entry;
> > > + }
> > > +
> > > + if (s->transaction_state[rrid].supported == true) {
> > > + start_addr = s->transaction_state[rrid].start_addr;
> > > + end_addr = s->transaction_state[rrid].end_addr;
> > > + } else {
> > > + /* No transaction information, use the same address */
> > > + start_addr = addr;
> > > + end_addr = addr;
> > > + }
> > > +
> > > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > > + &prior_entry_in_tlb);
> > > + if (result == ENTRY_HIT) {
> > > + entry.addr_mask = s->entry_addr[entry_idx].ea -
> > > + s->entry_addr[entry_idx].sa;
> > > + if (prior_entry_in_tlb) {
> > > + /* Make TLB repeat iommu translation on every access */
> >
> > I don't follow this, if we have a prior entry in the TLB cache we
> > don't cache the accesses?
>
> For the cached TLB result to be used, the highest-priority entry in the TLB must
> occupy the entire TLB page. If a lower-priority entry fills the entire TLB page,
> it is still necessary to check which entry the transaction hits on each access
> to the TLB page.
Oh! When you say "prior" you mean priority, not prior. That is a
little confusing.
Maybe just write `priority_entry_in_tlb` to be clear. Also I think
it's worth including you entire response in the code comment
Alistair
>
> >
> > > + entry.addr_mask = 0;
> > > + }
> > > + iopmp_perm = s->regs.entry[entry_idx].cfg_reg & IOPMP_RWX;
> > > + if (flags) {
> > > + if ((iopmp_perm & flags) == 0) {
> > > + /* Permission denied */
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_READ + flags - 1);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + entry.target_as = &s->blocked_rwx_as;
> > > + entry.perm = IOMMU_RW;
> > > + } else {
> > > + entry.target_as = &s->downstream_as;
> > > + entry.perm = iopmp_perm;
> > > + }
> > > + } else {
> > > + /* CPU access with IOMMU_NONE flag */
> > > + if (iopmp_perm & IOPMP_XO) {
> > > + if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) {
> > > + entry.target_as = &s->downstream_as;
> > > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > > + entry.target_as = &s->blocked_w_as;
> > > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > > + entry.target_as = &s->blocked_r_as;
> > > + } else {
> > > + entry.target_as = &s->blocked_rw_as;
> > > + }
> > > + } else {
> > > + if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) {
> > > + entry.target_as = &s->blocked_x_as;
> > > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) {
> > > + entry.target_as = &s->blocked_wx_as;
> > > + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) {
> > > + entry.target_as = &s->blocked_rx_as;
> > > + } else {
> > > + entry.target_as = &s->blocked_rwx_as;
> > > + }
> > > + }
> > > + entry.perm = IOMMU_RW;
> > > + }
> > > + } else {
> > > + if (flags) {
> > > + if (result == ENTRY_PAR_HIT) {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_PARHIT);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + } else {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_NOHIT);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, flags);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + }
> > > + }
> > > + /* CPU access with IOMMU_NONE flag no_hit or par_hit */
> > > + entry.target_as = &s->blocked_rwx_as;
> > > + entry.perm = IOMMU_RW;
> > > + }
> > > + return entry;
> > > +}
> > > +
> > > +static const MemoryRegionOps iopmp_ops = {
> > > + .read = iopmp_read,
> > > + .write = iopmp_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 4, .max_access_size = 4}
> > > +};
> > > +
> > > +static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr,
> > > + uint64_t value, unsigned size,
> > > + MemTxAttrs attrs)
> > > +{
> > > + IopmpState *s = IOPMP(opaque);
> > > + return address_space_write(&s->downstream_as, addr, attrs, &value, size);
> > > +}
> > > +
> > > +static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr,
> > > + uint64_t *pdata, unsigned size,
> > > + MemTxAttrs attrs)
> > > +{
> > > + IopmpState *s = IOPMP(opaque);
> > > + return address_space_read(&s->downstream_as, addr, attrs, pdata, size);
> > > +}
> > > +
> > > +static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr,
> > > + uint64_t *data, unsigned size,
> > > + MemTxAttrs attrs,
> > > + iopmp_access_type access_type) {
> > > + IopmpState *s = IOPMP(opaque);
> > > + int md_idx, entry_idx;
> > > + uint32_t error_info = 0;
> > > + uint32_t error_id = 0;
> > > + int rrid = attrs.requester_id;
> > > + int result;
> > > + hwaddr start_addr, end_addr;
> > > + start_addr = addr;
> > > + end_addr = addr;
> > > + result = match_entry(s, rrid, start_addr, end_addr, &md_idx, &entry_idx,
> > > + NULL);
> > > +
> > > + if (result == ENTRY_HIT) {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + access_type);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + } else if (result == ENTRY_PAR_HIT) {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, EID, entry_idx);
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_PARHIT);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE,
> > > + access_type);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + } else {
> > > + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, ETYPE,
> > > + ERR_REQINFO_ETYPE_NOHIT);
> > > + error_info = FIELD_DP32(error_info, ERR_REQINFO, TTYPE, access_type);
> > > + iopmp_error_reaction(s, error_id, start_addr, error_info);
> > > + }
> > > +
> > > + if (access_type == IOPMP_ACCESS_READ) {
> > > +
> > > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RRE)) {
> > > + case RRE_ERROR:
> > > + return MEMTX_ERROR;
> > > + break;
> > > + case RRE_SUCCESS_VALUE:
> > > + *data = s->fabricated_v;
> > > + return MEMTX_OK;
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + return MEMTX_OK;
> > > + } else if (access_type == IOPMP_ACCESS_WRITE) {
> > > +
> > > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RWE)) {
> > > + case RWE_ERROR:
> > > + return MEMTX_ERROR;
> > > + break;
> > > + case RWE_SUCCESS:
> > > + return MEMTX_OK;
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + return MEMTX_OK;
> > > + } else {
> > > +
> > > + switch (FIELD_EX32(s->regs.err_cfg, ERR_CFG, RXE)) {
> > > + case RXE_ERROR:
> > > + return MEMTX_ERROR;
> > > + break;
> > > + case RXE_SUCCESS_VALUE:
> > > + *data = s->fabricated_v;
> > > + return MEMTX_OK;
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + return MEMTX_OK;
> > > + }
> > > + return MEMTX_OK;
> > > +}
> > > +
> > > +static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value,
> > > + unsigned size, MemTxAttrs attrs)
> > > +{
> > > + return iopmp_handle_block(opaque, addr, &value, size, attrs,
> > > + IOPMP_ACCESS_WRITE);
> > > +}
> > > +
> > > +static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata,
> > > + unsigned size, MemTxAttrs attrs)
> > > +{
> > > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > > + IOPMP_ACCESS_READ);
> > > +}
> > > +
> > > +static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata,
> > > + unsigned size, MemTxAttrs attrs)
> > > +{
> > > + return iopmp_handle_block(opaque, addr, pdata, size, attrs,
> > > + IOPMP_ACCESS_FETCH);
> > > +}
> > > +
> > > +static const MemoryRegionOps iopmp_block_rw_ops = {
> > > + .fetch_with_attrs = iopmp_permssion_read,
> > > + .read_with_attrs = iopmp_block_read,
> > > + .write_with_attrs = iopmp_block_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_w_ops = {
> > > + .fetch_with_attrs = iopmp_permssion_read,
> > > + .read_with_attrs = iopmp_permssion_read,
> > > + .write_with_attrs = iopmp_block_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_r_ops = {
> > > + .fetch_with_attrs = iopmp_permssion_read,
> > > + .read_with_attrs = iopmp_block_read,
> > > + .write_with_attrs = iopmp_permssion_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_rwx_ops = {
> > > + .fetch_with_attrs = iopmp_block_fetch,
> > > + .read_with_attrs = iopmp_block_read,
> > > + .write_with_attrs = iopmp_block_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_wx_ops = {
> > > + .fetch_with_attrs = iopmp_block_fetch,
> > > + .read_with_attrs = iopmp_permssion_read,
> > > + .write_with_attrs = iopmp_block_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_rx_ops = {
> > > + .fetch_with_attrs = iopmp_block_fetch,
> > > + .read_with_attrs = iopmp_block_read,
> > > + .write_with_attrs = iopmp_permssion_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static const MemoryRegionOps iopmp_block_x_ops = {
> > > + .fetch_with_attrs = iopmp_block_fetch,
> > > + .read_with_attrs = iopmp_permssion_read,
> > > + .write_with_attrs = iopmp_permssion_write,
> > > + .endianness = DEVICE_NATIVE_ENDIAN,
> > > + .valid = {.min_access_size = 1, .max_access_size = 8},
> > > +};
> > > +
> > > +static void iopmp_realize(DeviceState *dev, Error **errp)
> > > +{
> > > + Object *obj = OBJECT(dev);
> > > + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> > > + IopmpState *s = IOPMP(dev);
> > > + uint64_t size;
> > > +
> > > + size = -1ULL;
> > > + s->model = IOPMP_MODEL_RAPIDK;
> >
> > Should this be a property to allow other models in the future?
>
> Yes, it will be refined in next revision.
>
> >
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > > +
> > > + s->prient_prog = s->default_prient_prog;
> > > + s->rrid_num = MIN(s->rrid_num, IOPMP_MAX_RRID_NUM);
> > > + s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM);
> > > + s->entry_num = s->md_num * s->k;
> > > + s->prio_entry = MIN(s->prio_entry, s->entry_num);
> > > +
> > > + s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t));
> > > + s->regs.mdcfg[0] = s->k;
> > > +
> > > + s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > > + s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t));
> > > + s->regs.entry = g_malloc0(s->entry_num * sizeof(iopmp_entry_t));
> > > + s->entry_addr = g_malloc0(s->entry_num * sizeof(iopmp_addr_t));
> > > + s->transaction_state = g_malloc0(s->rrid_num *
> > > + sizeof(iopmp_transaction_state));
> > > + qemu_mutex_init(&s->iopmp_transaction_mutex);
> > > +
> > > + memory_region_init_iommu(&s->iommu, sizeof(s->iommu),
> > > + TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > > + obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX);
> > > + memory_region_init_io(&s->mmio, obj, &iopmp_ops,
> > > + s, "iopmp-regs", 0x100000);
> > > + sysbus_init_mmio(sbd, &s->mmio);
> > > +
> > > + memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops,
> > > + s, "iopmp-blocked-rw", size);
> > > + memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops,
> > > + s, "iopmp-blocked-w", size);
> > > + memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops,
> > > + s, "iopmp-blocked-r", size);
> > > +
> > > + memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops,
> > > + s, "iopmp-blocked-rwx", size);
> > > + memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops,
> > > + s, "iopmp-blocked-wx", size);
> > > + memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops,
> > > + s, "iopmp-blocked-rx", size);
> > > + memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops,
> > > + s, "iopmp-blocked-x", size);
> > > + address_space_init(&s->blocked_rw_as, &s->blocked_rw,
> > > + "iopmp-blocked-rw-as");
> > > + address_space_init(&s->blocked_w_as, &s->blocked_w,
> > > + "iopmp-blocked-w-as");
> > > + address_space_init(&s->blocked_r_as, &s->blocked_r,
> > > + "iopmp-blocked-r-as");
> > > +
> > > + address_space_init(&s->blocked_rwx_as, &s->blocked_rwx,
> > > + "iopmp-blocked-rwx-as");
> > > + address_space_init(&s->blocked_wx_as, &s->blocked_wx,
> > > + "iopmp-blocked-wx-as");
> > > + address_space_init(&s->blocked_rx_as, &s->blocked_rx,
> > > + "iopmp-blocked-rx-as");
> > > + address_space_init(&s->blocked_x_as, &s->blocked_x,
> > > + "iopmp-blocked-x-as");
> > > +}
> > > +
> > > +static void iopmp_reset(DeviceState *dev)
> > > +{
> > > + IopmpState *s = IOPMP(dev);
> > > +
> > > + qemu_set_irq(s->irq, 0);
> > > + memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t));
> > > + memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t));
> > > + memset(s->entry_addr, 0, s->entry_num * sizeof(iopmp_addr_t));
> > > +
> > > + s->regs.mdlck = 0;
> > > + s->regs.mdlckh = 0;
> > > + s->regs.entrylck = 0;
> > > + s->regs.mdstall = 0;
> > > + s->regs.mdstallh = 0;
> > > + s->regs.rridscp = 0;
> > > + s->regs.err_cfg = 0;
> > > + s->regs.err_reqaddr = 0;
> > > + s->regs.err_reqid = 0;
> > > + s->regs.err_reqinfo = 0;
> > > +
> > > + s->prient_prog = s->default_prient_prog;
> > > + s->enable = 0;
> > > +
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, s->md_num);
> > > + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, 1);
> > > + s->regs.mdcfg[0] = s->k;
> > > +}
> > > +
> > > +static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> > > +{
> > > + return attrs.requester_id;
> > > +}
> > > +
> > > +static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data)
> > > +{
> > > + IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> > > +
> > > + imrc->translate = iopmp_translate;
> > > + imrc->attrs_to_index = iopmp_attrs_to_index;
> > > +}
> > > +
> > > +static Property iopmp_property[] = {
> > > + DEFINE_PROP_BOOL("prient_prog", IopmpState, default_prient_prog, true),
> > > + DEFINE_PROP_UINT32("k", IopmpState, k, 6),
> > > + DEFINE_PROP_UINT32("prio_entry", IopmpState, prio_entry, 48),
> > > + DEFINE_PROP_UINT32("rrid_num", IopmpState, rrid_num, 16),
> > > + DEFINE_PROP_UINT32("md_num", IopmpState, md_num, 8),
> > > + DEFINE_PROP_UINT32("entry_offset", IopmpState, entry_offset, 0x4000),
> > > + DEFINE_PROP_UINT32("fabricated_v", IopmpState, fabricated_v, 0x0),
> > > + DEFINE_PROP_END_OF_LIST(),
> > > +};
> > > +
> > > +static void iopmp_class_init(ObjectClass *klass, void *data)
> > > +{
> > > + DeviceClass *dc = DEVICE_CLASS(klass);
> > > + device_class_set_props(dc, iopmp_property);
> > > + dc->realize = iopmp_realize;
> > > + dc->reset = iopmp_reset;
> > > +}
> > > +
> > > +static void iopmp_init(Object *obj)
> > > +{
> > > + IopmpState *s = IOPMP(obj);
> > > + SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
> > > +
> > > + sysbus_init_irq(sbd, &s->irq);
> > > +}
> > > +
> > > +static const TypeInfo iopmp_info = {
> > > + .name = TYPE_IOPMP,
> > > + .parent = TYPE_SYS_BUS_DEVICE,
> > > + .instance_size = sizeof(IopmpState),
> > > + .instance_init = iopmp_init,
> > > + .class_init = iopmp_class_init,
> > > +};
> > > +
> > > +static const TypeInfo
> > > +iopmp_iommu_memory_region_info = {
> > > + .name = TYPE_IOPMP_IOMMU_MEMORY_REGION,
> > > + .parent = TYPE_IOMMU_MEMORY_REGION,
> > > + .class_init = iopmp_iommu_memory_region_class_init,
> > > +};
> > > +
> > > +static void
> > > +iopmp_register_types(void)
> > > +{
> > > + type_register_static(&iopmp_info);
> > > + type_register_static(&iopmp_iommu_memory_region_info);
> > > +}
> > > +
> > > +type_init(iopmp_register_types);
> > > diff --git a/hw/misc/trace-events b/hw/misc/trace-events
> > > index 1be0717c0c..c148166d2d 100644
> > > --- a/hw/misc/trace-events
> > > +++ b/hw/misc/trace-events
> > > @@ -362,3 +362,6 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
> > > aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> > > aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
> > >
> > > +# riscv_iopmp.c
> > > +iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > > +iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x"
> > > diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h
> > > new file mode 100644
> > > index 0000000000..b8fe479108
> > > --- /dev/null
> > > +++ b/include/hw/misc/riscv_iopmp.h
> > > @@ -0,0 +1,168 @@
> > > +/*
> > > + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection)
> > > + *
> > > + * Copyright (c) 2023-2024 Andes Tech. Corp.
> > > + *
> > > + * SPDX-License-Identifier: GPL-2.0-or-later
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify it
> > > + * under the terms and conditions of the GNU General Public License,
> > > + * version 2 or later, as published by the Free Software Foundation.
> > > + *
> > > + * This program is distributed in the hope it will be useful, but WITHOUT
> > > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > > + * more details.
> > > + *
> > > + * You should have received a copy of the GNU General Public License along with
> > > + * this program. If not, see <http://www.gnu.org/licenses/>.
> > > + */
> > > +
> > > +#ifndef RISCV_IOPMP_H
> > > +#define RISCV_IOPMP_H
> > > +
> > > +#include "hw/sysbus.h"
> > > +#include "qemu/typedefs.h"
> > > +#include "memory.h"
> > > +#include "exec/hwaddr.h"
> > > +
> > > +#define TYPE_IOPMP "iopmp"
> > > +#define IOPMP(obj) OBJECT_CHECK(IopmpState, (obj), TYPE_IOPMP)
> > > +
> > > +#define IOPMP_MAX_MD_NUM 63
> > > +#define IOPMP_MAX_RRID_NUM 65535
> > > +#define IOPMP_MAX_ENTRY_NUM 65535
> > > +
> > > +#define VENDER_VIRT 0
> > > +#define SPECVER_0_9_1 91
> > > +#define IMPID_0_9_1 91
> > > +
> > > +#define RRE_ERROR 0
> > > +#define RRE_SUCCESS_VALUE 1
> > > +
> > > +#define RWE_ERROR 0
> > > +#define RWE_SUCCESS 1
> > > +
> > > +#define RXE_ERROR 0
> > > +#define RXE_SUCCESS_VALUE 1
> > > +
> > > +#define ERR_REQINFO_TTYPE_READ 1
> > > +#define ERR_REQINFO_TTYPE_WRITE 2
> > > +#define ERR_REQINFO_TTYPE_FETCH 3
> > > +#define ERR_REQINFO_ETYPE_NOERROR 0
> > > +#define ERR_REQINFO_ETYPE_READ 1
> > > +#define ERR_REQINFO_ETYPE_WRITE 2
> > > +#define ERR_REQINFO_ETYPE_FETCH 3
> > > +#define ERR_REQINFO_ETYPE_PARHIT 4
> > > +#define ERR_REQINFO_ETYPE_NOHIT 5
> > > +#define ERR_REQINFO_ETYPE_RRID 6
> > > +#define ERR_REQINFO_ETYPE_USER 7
> > > +
> > > +#define IOPMP_MODEL_FULL 0
> > > +#define IOPMP_MODEL_RAPIDK 0x1
> > > +#define IOPMP_MODEL_DYNAMICK 0x2
> > > +#define IOPMP_MODEL_ISOLATION 0x3
> > > +#define IOPMP_MODEL_COMPACTK 0x4
> > > +
> > > +#define ENTRY_NO_HIT 0
> > > +#define ENTRY_PAR_HIT 1
> > > +#define ENTRY_HIT 2
> >
> > Why not an enum?
> >
>
> Thank you for your suggestion. There will be enums in next version.
>
> Thanks,
> Ethan Chen
>
> > Alistair
> >
> > > +
> > > +/* The generic iopmp address space which downstream is system memory */
> > > +extern AddressSpace iopmp_container_as;
> > > +
> > > +typedef enum {
> > > + IOPMP_AMATCH_OFF, /* Null (off) */
> > > + IOPMP_AMATCH_TOR, /* Top of Range */
> > > + IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */
> > > + IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */
> > > +} iopmp_am_t;
> > > +
> > > +typedef enum {
> > > + IOPMP_ACCESS_READ = 1,
> > > + IOPMP_ACCESS_WRITE = 2,
> > > + IOPMP_ACCESS_FETCH = 3
> > > +} iopmp_access_type;
> > > +
> > > +typedef enum {
> > > + IOPMP_NONE = 0,
> > > + IOPMP_RO = 1,
> > > + IOPMP_WO = 2,
> > > + IOPMP_RW = 3,
> > > + IOPMP_XO = 4,
> > > + IOPMP_RX = 5,
> > > + IOPMP_WX = 6,
> > > + IOPMP_RWX = 7,
> > > +} iopmp_permission;
> > > +
> > > +typedef struct {
> > > + uint32_t addr_reg;
> > > + uint32_t addrh_reg;
> > > + uint32_t cfg_reg;
> > > +} iopmp_entry_t;
> > > +
> > > +typedef struct {
> > > + uint64_t sa;
> > > + uint64_t ea;
> > > +} iopmp_addr_t;
> > > +
> > > +typedef struct {
> > > + uint32_t *srcmd_en;
> > > + uint32_t *srcmd_enh;
> > > + uint32_t *mdcfg;
> > > + iopmp_entry_t *entry;
> > > + uint32_t mdlck;
> > > + uint32_t mdlckh;
> > > + uint32_t entrylck;
> > > + uint32_t mdcfglck;
> > > + uint32_t mdstall;
> > > + uint32_t mdstallh;
> > > + uint32_t rridscp;
> > > + uint32_t err_cfg;
> > > + uint64_t err_reqaddr;
> > > + uint32_t err_reqid;
> > > + uint32_t err_reqinfo;
> > > +} iopmp_regs;
> > > +
> > > +
> > > +/* To detect partially hit */
> > > +typedef struct iopmp_transaction_state {
> > > + bool running;
> > > + bool supported;
> > > + hwaddr start_addr;
> > > + hwaddr end_addr;
> > > +} iopmp_transaction_state;
> > > +
> > > +typedef struct IopmpState {
> > > + SysBusDevice parent_obj;
> > > + iopmp_addr_t *entry_addr;
> > > + MemoryRegion mmio;
> > > + IOMMUMemoryRegion iommu;
> > > + IOMMUMemoryRegion *next_iommu;
> > > + iopmp_regs regs;
> > > + MemoryRegion *downstream;
> > > + MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx,
> > > + blocked_wx, blocked_rwx;
> > > + MemoryRegion stall_io;
> > > + uint32_t model;
> > > + uint32_t k;
> > > + bool prient_prog;
> > > + bool default_prient_prog;
> > > + iopmp_transaction_state *transaction_state;
> > > + QemuMutex iopmp_transaction_mutex;
> > > +
> > > + AddressSpace downstream_as;
> > > + AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as,
> > > + blocked_rx_as, blocked_wx_as, blocked_rwx_as;
> > > + qemu_irq irq;
> > > + bool enable;
> > > +
> > > + uint32_t prio_entry;
> > > + uint32_t rrid_num;
> > > + uint32_t md_num;
> > > + uint32_t entry_num;
> > > + uint32_t entry_offset;
> > > + uint32_t fabricated_v;
> > > +} IopmpState;
> > > +
> > > +#endif
> > > --
> > > 2.34.1
> > >
> > >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory
2024-08-09 10:11 ` Ethan Chen via
@ 2024-08-12 0:47 ` Alistair Francis
2024-08-12 2:44 ` Ethan Chen via
0 siblings, 1 reply; 27+ messages in thread
From: Alistair Francis @ 2024-08-12 0:47 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Fri, Aug 9, 2024 at 8:11 PM Ethan Chen <ethan84@andestech.com> wrote:
>
> On Thu, Aug 08, 2024 at 02:23:56PM +1000, Alistair Francis wrote:
> >
> > On Mon, Jul 15, 2024 at 8:13 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > >
> > > To enable system memory transactions through the IOPMP, memory regions must
> > > be moved to the IOPMP downstream and then replaced with IOMMUs for IOPMP
> > > translation.
> > >
> > > The iopmp_setup_system_memory() function copies subregions of system memory
> > > to create the IOPMP downstream and then replaces the specified memory
> > > regions in system memory with the IOMMU regions of the IOPMP. It also
> > > adds entries to a protection map that records the relationship between
> > > physical address regions and the IOPMP, which is used by the IOPMP DMA
> > > API to send transaction information.
> > >
> > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > ---
> > > hw/misc/riscv_iopmp.c | 61 +++++++++++++++++++++++++++++++++++
> > > include/hw/misc/riscv_iopmp.h | 3 ++
> > > 2 files changed, 64 insertions(+)
> > >
> > > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > > index db43e3c73f..e62ac57437 100644
> > > --- a/hw/misc/riscv_iopmp.c
> > > +++ b/hw/misc/riscv_iopmp.c
> > > @@ -1151,4 +1151,65 @@ iopmp_register_types(void)
> > > type_register_static(&iopmp_iommu_memory_region_info);
> > > }
> > >
> > > +/*
> > > + * Copies subregions from the source memory region to the destination memory
> > > + * region
> > > + */
> > > +static void copy_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr)
Maybe `alias_memory_subregions()` or `link_memory_subregions()`
instead of `copy_memory_subregions()`.
> > > +{
> > > + int32_t priority;
> > > + hwaddr addr;
> > > + MemoryRegion *alias, *subregion;
> > > + QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) {
> > > + priority = subregion->priority;
> > > + addr = subregion->addr;
> > > + alias = g_malloc0(sizeof(MemoryRegion));
> > > + memory_region_init_alias(alias, NULL, subregion->name, subregion, 0,
> > > + memory_region_size(subregion));
> > > + memory_region_add_subregion_overlap(dst_mr, addr, alias, priority);
> > > + }
> > > +}
> >
> > This seems strange. Do we really need to do this?
> >
> > I haven't looked at the memory_region stuff for awhile, but this seems
> > clunky and prone to breakage.
> >
> > We already link s->iommu with the system memory
> >
>
> s->iommu occupies the address of the protected devices in system
> memory. Since IOPMP does not alter address, the target address space
> must differ from system memory to avoid infinite recursive iommu access.
>
> The transaction will be redirected to a downstream memory region, which
> is almost identical to system memory but without the iommu memory
> region of IOPMP.
>
> This function serves as a helper to create that downstream memory region.
What I don't understand is that we already have target_mr as a
subregion of downstream, is that not enough?
Alistair
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support
2024-08-09 10:14 ` Ethan Chen via
@ 2024-08-12 0:48 ` Alistair Francis
2024-08-12 2:55 ` Ethan Chen via
0 siblings, 1 reply; 27+ messages in thread
From: Alistair Francis @ 2024-08-12 0:48 UTC (permalink / raw)
To: Ethan Chen
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Fri, Aug 9, 2024 at 8:14 PM Ethan Chen <ethan84@andestech.com> wrote:
>
> On Thu, Aug 08, 2024 at 02:01:13PM +1000, Alistair Francis wrote:
> >
> > On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > >
> > > - Add 'iopmp=on' option to enable IOPMP. It adds an iopmp device virt machine
> > > to protect all regions of system memory, and configures RRID of CPU.
> > >
> > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > ---
> > > docs/system/riscv/virt.rst | 5 +++
> > > hw/riscv/Kconfig | 1 +
> > > hw/riscv/virt.c | 63 ++++++++++++++++++++++++++++++++++++++
> > > include/hw/riscv/virt.h | 5 ++-
> > > 4 files changed, 73 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
> > > index 9a06f95a34..9fd006ccc2 100644
> > > --- a/docs/system/riscv/virt.rst
> > > +++ b/docs/system/riscv/virt.rst
> > > @@ -116,6 +116,11 @@ The following machine-specific options are supported:
> > > having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
> > > the default number of per-HART VS-level AIA IMSIC pages is 0.
> > >
> > > +- iopmp=[on|off]
> > > +
> > > + When this option is "on", an IOPMP device is added to machine. IOPMP checks
> > > + memory transcations in system memory. This option is assumed to be "off".
> >
> > We probably should have a a little more here. You don't even mention
> > that this is the rapid-k model.
>
> I'll provide more details.
>
> >
> > It might be worth adding a `model` field, to make it easier to add
> > other models in the future. Thoughts?
> >
>
> I think the IOPMP model should be a device property and not
> configured here.
It should be a device property, but then how does a user configure
that? I guess users can globally set device props, but it's a bit
clunky
Alistair
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory
2024-08-12 0:47 ` Alistair Francis
@ 2024-08-12 2:44 ` Ethan Chen via
0 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-08-12 2:44 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Aug 12, 2024 at 10:47:33AM +1000, Alistair Francis wrote:
> [EXTERNAL MAIL]
>
> On Fri, Aug 9, 2024 at 8:11 PM Ethan Chen <ethan84@andestech.com> wrote:
> >
> > On Thu, Aug 08, 2024 at 02:23:56PM +1000, Alistair Francis wrote:
> > >
> > > On Mon, Jul 15, 2024 at 8:13 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > > >
> > > > To enable system memory transactions through the IOPMP, memory regions must
> > > > be moved to the IOPMP downstream and then replaced with IOMMUs for IOPMP
> > > > translation.
> > > >
> > > > The iopmp_setup_system_memory() function copies subregions of system memory
> > > > to create the IOPMP downstream and then replaces the specified memory
> > > > regions in system memory with the IOMMU regions of the IOPMP. It also
> > > > adds entries to a protection map that records the relationship between
> > > > physical address regions and the IOPMP, which is used by the IOPMP DMA
> > > > API to send transaction information.
> > > >
> > > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > > ---
> > > > hw/misc/riscv_iopmp.c | 61 +++++++++++++++++++++++++++++++++++
> > > > include/hw/misc/riscv_iopmp.h | 3 ++
> > > > 2 files changed, 64 insertions(+)
> > > >
> > > > diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c
> > > > index db43e3c73f..e62ac57437 100644
> > > > --- a/hw/misc/riscv_iopmp.c
> > > > +++ b/hw/misc/riscv_iopmp.c
> > > > @@ -1151,4 +1151,65 @@ iopmp_register_types(void)
> > > > type_register_static(&iopmp_iommu_memory_region_info);
> > > > }
> > > >
> > > > +/*
> > > > + * Copies subregions from the source memory region to the destination memory
> > > > + * region
> > > > + */
> > > > +static void copy_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr)
>
> Maybe `alias_memory_subregions()` or `link_memory_subregions()`
> instead of `copy_memory_subregions()`.
Thanks for the suggestion. I will clarify it.
>
> > > > +{
> > > > + int32_t priority;
> > > > + hwaddr addr;
> > > > + MemoryRegion *alias, *subregion;
> > > > + QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) {
> > > > + priority = subregion->priority;
> > > > + addr = subregion->addr;
> > > > + alias = g_malloc0(sizeof(MemoryRegion));
> > > > + memory_region_init_alias(alias, NULL, subregion->name, subregion, 0,
> > > > + memory_region_size(subregion));
> > > > + memory_region_add_subregion_overlap(dst_mr, addr, alias, priority);
> > > > + }
> > > > +}
> > >
> > > This seems strange. Do we really need to do this?
> > >
> > > I haven't looked at the memory_region stuff for awhile, but this seems
> > > clunky and prone to breakage.
> > >
> > > We already link s->iommu with the system memory
> > >
> >
> > s->iommu occupies the address of the protected devices in system
> > memory. Since IOPMP does not alter address, the target address space
> > must differ from system memory to avoid infinite recursive iommu access.
> >
> > The transaction will be redirected to a downstream memory region, which
> > is almost identical to system memory but without the iommu memory
> > region of IOPMP.
> >
> > This function serves as a helper to create that downstream memory region.
>
> What I don't understand is that we already have target_mr as a
> subregion of downstream, is that not enough?
>
Did you mean the target_mr in iopmp_setup_system_memory? It specifies
the container that IOPMP wants to protect. We don't create
separate iommus for each subregion. We create a single iommu for the
entire container (system memory).
The access to target_mr (system memory) which has iommu region of
IOPMP, will be translated to downstream which has protected device
regions.
Thanks,
Ethan Chen
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support
2024-08-12 0:48 ` Alistair Francis
@ 2024-08-12 2:55 ` Ethan Chen via
0 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-08-12 2:55 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, dbarboza,
zhiwei_liu, qemu-riscv
On Mon, Aug 12, 2024 at 10:48:40AM +1000, Alistair Francis wrote:
> [EXTERNAL MAIL]
>
> On Fri, Aug 9, 2024 at 8:14 PM Ethan Chen <ethan84@andestech.com> wrote:
> >
> > On Thu, Aug 08, 2024 at 02:01:13PM +1000, Alistair Francis wrote:
> > >
> > > On Mon, Jul 15, 2024 at 8:15 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > > >
> > > > - Add 'iopmp=on' option to enable IOPMP. It adds an iopmp device virt machine
> > > > to protect all regions of system memory, and configures RRID of CPU.
> > > >
> > > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > > ---
> > > > docs/system/riscv/virt.rst | 5 +++
> > > > hw/riscv/Kconfig | 1 +
> > > > hw/riscv/virt.c | 63 ++++++++++++++++++++++++++++++++++++++
> > > > include/hw/riscv/virt.h | 5 ++-
> > > > 4 files changed, 73 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
> > > > index 9a06f95a34..9fd006ccc2 100644
> > > > --- a/docs/system/riscv/virt.rst
> > > > +++ b/docs/system/riscv/virt.rst
> > > > @@ -116,6 +116,11 @@ The following machine-specific options are supported:
> > > > having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
> > > > the default number of per-HART VS-level AIA IMSIC pages is 0.
> > > >
> > > > +- iopmp=[on|off]
> > > > +
> > > > + When this option is "on", an IOPMP device is added to machine. IOPMP checks
> > > > + memory transcations in system memory. This option is assumed to be "off".
> > >
> > > We probably should have a a little more here. You don't even mention
> > > that this is the rapid-k model.
> >
> > I'll provide more details.
> >
> > >
> > > It might be worth adding a `model` field, to make it easier to add
> > > other models in the future. Thoughts?
> > >
> >
> > I think the IOPMP model should be a device property and not
> > configured here.
>
> It should be a device property, but then how does a user configure
> that? I guess users can globally set device props, but it's a bit
> clunky
>
Because IOPMP has a lot props, I think it is better to configure them
through global device props instead of machine option.
Thanks,
Ethan Chen
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 0/8] Support RISC-V IOPMP
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
` (7 preceding siblings ...)
2024-07-15 10:14 ` [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support Ethan Chen via
@ 2024-11-05 18:36 ` Daniel Henrique Barboza
2024-11-08 1:16 ` Ethan Chen via
8 siblings, 1 reply; 27+ messages in thread
From: Daniel Henrique Barboza @ 2024-11-05 18:36 UTC (permalink / raw)
To: Ethan Chen, qemu-devel
Cc: richard.henderson, pbonzini, peterx, david, philmd, palmer,
alistair.francis, bmeng.cn, liwei1518, zhiwei_liu, qemu-riscv
Hi Ethan,
Do you plan to send a new version of this work? It seems to me that we're
a couple of reviews away from getting it merged.
Thanks,
Daniel
On 7/15/24 6:56 AM, Ethan Chen wrote:
> This series implements basic functions of IOPMP specification v0.9.1 rapid-k
> model.
> The specification url:
> https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
>
> When IOPMP is enabled, memory access to system memory from devices and
> the CPU will be checked by the IOPMP.
>
> The issue of CPU access to non-CPU address space via IOMMU was previously
> mentioned by Jim Shu, who provided a patch[1] to fix it. IOPMP also requires
> this patch.
>
> [1] accel/tcg: Store section pointer in CPUTLBEntryFull
> https://patchew.org/QEMU/20240612081416.29704-1-jim.shu@sifive.com/20240612081416.29704-2-jim.shu@sifive.com/
>
>
> Changes for v8:
>
> - Support transactions from CPU
> - Add an API to set up IOPMP protection for system memory
> - Add an API to configure the RISCV CPU to support IOPMP and specify the
> CPU's RRID
> - Add an API for DMA operation with IOPMP support
> - Add SPDX license identifiers to new files (Stefan W.)
> - Remove IOPMP PCI interface(pci_setup_iommu) (Zhiwei)
>
> Changes for v7:
>
> - Change the specification version to v0.9.1
> - Remove the sps extension
> - Remove stall support, transaction information which need requestor device
> support.
> - Remove iopmp_cascade option for virt machine
> - Refine 'addr' range checks switch case (Daniel)
>
> Ethan Chen (8):
> memory: Introduce memory region fetch operation
> system/physmem: Support IOMMU granularity smaller than TARGET_PAGE
> size
> target/riscv: Add support for IOPMP
> hw/misc/riscv_iopmp: Add RISC-V IOPMP device
> hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system
> memory
> hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
> hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API
> hw/riscv/virt: Add IOPMP support
>
> accel/tcg/cputlb.c | 29 +-
> docs/system/riscv/virt.rst | 5 +
> hw/misc/Kconfig | 3 +
> hw/misc/meson.build | 1 +
> hw/misc/riscv_iopmp.c | 1289 +++++++++++++++++++++++++++++++++
> hw/misc/trace-events | 3 +
> hw/riscv/Kconfig | 1 +
> hw/riscv/virt.c | 63 ++
> include/exec/memory.h | 30 +
> include/hw/misc/riscv_iopmp.h | 173 +++++
> include/hw/riscv/virt.h | 5 +-
> system/memory.c | 104 +++
> system/physmem.c | 4 +
> system/trace-events | 2 +
> target/riscv/cpu_cfg.h | 2 +
> target/riscv/cpu_helper.c | 18 +-
> 16 files changed, 1722 insertions(+), 10 deletions(-)
> create mode 100644 hw/misc/riscv_iopmp.c
> create mode 100644 include/hw/misc/riscv_iopmp.h
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 0/8] Support RISC-V IOPMP
2024-11-05 18:36 ` [PATCH v8 0/8] Support RISC-V IOPMP Daniel Henrique Barboza
@ 2024-11-08 1:16 ` Ethan Chen via
0 siblings, 0 replies; 27+ messages in thread
From: Ethan Chen via @ 2024-11-08 1:16 UTC (permalink / raw)
To: Daniel Henrique Barboza
Cc: qemu-devel, richard.henderson, pbonzini, peterx, david, philmd,
palmer, alistair.francis, bmeng.cn, liwei1518, zhiwei_liu,
qemu-riscv
On Tue, Nov 05, 2024 at 03:36:07PM -0300, Daniel Henrique Barboza wrote:
> [EXTERNAL MAIL]
>
> Hi Ethan,
>
>
> Do you plan to send a new version of this work? It seems to me that we're
> a couple of reviews away from getting it merged.
>
Hi Daniel,
Thanks for checking in! I do plan to send an updated version, but it may take a
bit more time.
Best regards,
Ethan
>
> On 7/15/24 6:56 AM, Ethan Chen wrote:
> > This series implements basic functions of IOPMP specification v0.9.1 rapid-k
> > model.
> > The specification url:
> > https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.1
> >
> > When IOPMP is enabled, memory access to system memory from devices and
> > the CPU will be checked by the IOPMP.
> >
> > The issue of CPU access to non-CPU address space via IOMMU was previously
> > mentioned by Jim Shu, who provided a patch[1] to fix it. IOPMP also requires
> > this patch.
> >
> > [1] accel/tcg: Store section pointer in CPUTLBEntryFull
> > https://patchew.org/QEMU/20240612081416.29704-1-jim.shu@sifive.com/20240612081416.29704-2-jim.shu@sifive.com/
> >
> >
> > Changes for v8:
> >
> > - Support transactions from CPU
> > - Add an API to set up IOPMP protection for system memory
> > - Add an API to configure the RISCV CPU to support IOPMP and specify the
> > CPU's RRID
> > - Add an API for DMA operation with IOPMP support
> > - Add SPDX license identifiers to new files (Stefan W.)
> > - Remove IOPMP PCI interface(pci_setup_iommu) (Zhiwei)
> >
> > Changes for v7:
> >
> > - Change the specification version to v0.9.1
> > - Remove the sps extension
> > - Remove stall support, transaction information which need requestor device
> > support.
> > - Remove iopmp_cascade option for virt machine
> > - Refine 'addr' range checks switch case (Daniel)
> >
> > Ethan Chen (8):
> > memory: Introduce memory region fetch operation
> > system/physmem: Support IOMMU granularity smaller than TARGET_PAGE
> > size
> > target/riscv: Add support for IOPMP
> > hw/misc/riscv_iopmp: Add RISC-V IOPMP device
> > hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system
> > memory
> > hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support
> > hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API
> > hw/riscv/virt: Add IOPMP support
> >
> > accel/tcg/cputlb.c | 29 +-
> > docs/system/riscv/virt.rst | 5 +
> > hw/misc/Kconfig | 3 +
> > hw/misc/meson.build | 1 +
> > hw/misc/riscv_iopmp.c | 1289 +++++++++++++++++++++++++++++++++
> > hw/misc/trace-events | 3 +
> > hw/riscv/Kconfig | 1 +
> > hw/riscv/virt.c | 63 ++
> > include/exec/memory.h | 30 +
> > include/hw/misc/riscv_iopmp.h | 173 +++++
> > include/hw/riscv/virt.h | 5 +-
> > system/memory.c | 104 +++
> > system/physmem.c | 4 +
> > system/trace-events | 2 +
> > target/riscv/cpu_cfg.h | 2 +
> > target/riscv/cpu_helper.c | 18 +-
> > 16 files changed, 1722 insertions(+), 10 deletions(-)
> > create mode 100644 hw/misc/riscv_iopmp.c
> > create mode 100644 include/hw/misc/riscv_iopmp.h
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2024-11-08 1:48 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-07-15 9:56 [PATCH v8 0/8] Support RISC-V IOPMP Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 1/8] memory: Introduce memory region fetch operation Ethan Chen via
2024-07-15 9:56 ` [PATCH v8 2/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size Ethan Chen via
2024-08-08 4:12 ` Alistair Francis
2024-07-15 9:56 ` [PATCH v8 3/8] target/riscv: Add support for IOPMP Ethan Chen via
2024-08-08 4:13 ` Alistair Francis
2024-07-15 9:56 ` [PATCH v8 4/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device Ethan Chen via
2024-08-08 3:56 ` Alistair Francis
2024-08-09 9:42 ` Ethan Chen via
2024-08-12 0:42 ` Alistair Francis
2024-08-09 10:03 ` Ethan Chen via
2024-07-15 10:12 ` [PATCH v8 5/8] hw/misc/riscv_iopmp: Add API to set up IOPMP protection for system memory Ethan Chen via
2024-08-08 4:23 ` Alistair Francis
2024-08-09 10:11 ` Ethan Chen via
2024-08-12 0:47 ` Alistair Francis
2024-08-12 2:44 ` Ethan Chen via
2024-07-15 10:14 ` [PATCH v8 6/8] hw/misc/riscv_iopmp: Add API to configure RISCV CPU IOPMP support Ethan Chen via
2024-08-08 4:25 ` Alistair Francis
2024-08-09 9:56 ` Ethan Chen via
2024-07-15 10:14 ` [PATCH v8 7/8] hw/misc/riscv_iopmp: Add DMA operation with IOPMP support API Ethan Chen via
2024-07-15 10:14 ` [PATCH v8 8/8] hw/riscv/virt: Add IOPMP support Ethan Chen via
2024-08-08 4:01 ` Alistair Francis
2024-08-09 10:14 ` Ethan Chen via
2024-08-12 0:48 ` Alistair Francis
2024-08-12 2:55 ` Ethan Chen via
2024-11-05 18:36 ` [PATCH v8 0/8] Support RISC-V IOPMP Daniel Henrique Barboza
2024-11-08 1:16 ` Ethan Chen via
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).