From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xen.org
Cc: edgar.iglesias@xilinx.com,
Stefano Stabellini <stefanos@xilinx.com>,
julien.grall@arm.com, sstabellini@kernel.org
Subject: [PATCH v3 4/6] xen/arm: zynqmp: eemi access control
Date: Fri, 10 Aug 2018 17:01:48 -0700
Message-ID: <1533945710-15159-4-git-send-email-sstabellini@kernel.org>
In-Reply-To: <alpine.DEB.2.10.1808101435481.32304@sstabellini-ThinkPad-X260>
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
Introduce data structures to implement basic access controls.
Introduce the following three functions:

domain_has_node_access: check access to a PM node
domain_has_reset_access: check access to a reset line
domain_has_mmio_access: check access to a register
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
xen/arch/arm/platforms/xilinx-zynqmp-eemi.c | 783 ++++++++++++++++++++++++++++
1 file changed, 783 insertions(+)
diff --git a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
index c3a19e9..62cc15c 100644
--- a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
+++ b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
@@ -16,6 +16,74 @@
* GNU General Public License for more details.
*/
+/*
+ * EEMI Power Management API access
+ *
+ * Refs:
+ * https://www.xilinx.com/support/documentation/user_guides/ug1200-eemi-api.pdf
+ *
+ * Background:
+ * The ZynqMP has a subsystem named the PMU with a CPU and special devices
+ * dedicated to running Power Management Firmware. Other masters in the
+ * system need to send requests to the PMU in order to, for example:
+ * * Manage power state
+ * * Configure clocks
+ * * Program bitstreams for the programmable logic
+ * * etc
+ *
+ * Although the details of the setup are configurable, in the common case
+ * the PMU lives in the Secure world. The NS world cannot directly
+ * communicate with it and must use proxy services from ARM Trusted
+ * Firmware to reach the PMU.
+ *
+ * Power Management on the ZynqMP is implemented in a layered manner.
+ * The PMU knows about various masters and will enforce access controls
+ * based on a pre-configured partitioning. This configuration dictates
+ * which devices are owned by the various masters and the PMU FW makes sure
+ * that a given master cannot turn off a device that it does not own or that
+ * is in use by other masters.
+ *
+ * The PMU is not aware of multiple execution states in masters.
+ * For example, it treats the ARMv8 cores as single units and does not
+ * distinguish between Secure and NS OSes, nor does it know about hypervisors
+ * and multiple guests. It is up to software on the ARMv8 cores to present
+ * a unified view of its power requirements.
+ *
+ * To implement this unified view, ARM Trusted Firmware at EL3 provides
+ * access to the PM API via SMC calls. ARM Trusted Firmware is responsible
+ * for mediating between the Secure and the NS world, rejecting SMC calls
+ * that request changes that are not allowed.
+ *
+ * Xen running above ATF owns the NS world and is responsible for presenting
+ * unified PM requests taking all guests and the hypervisor into account.
+ *
+ * Implementation:
+ * The PM API contains different classes of calls.
+ * Certain calls are harmless to expose to any guest.
+ * These include calls to get the PM API version, or to read out the
+ * version of the chip we're running on. Other calls affect specific
+ * devices and must only be exposed to guests that own the device.
+ *
+ * In order to correctly virtualize these calls, we need to know if
+ * guests issuing these calls have ownership of the given device.
+ * The approach taken here is to map PM API Nodes identifying
+ * a device into base addresses for registers that belong to that
+ * same device.
+ *
+ * If the guest has access to a device's registers, we give the guest
+ * access to PM API calls that affect that device. This is implemented
+ * by pm_node_access and domain_has_node_access().
+ *
+ * MMIO access:
+ * These calls allow guests to access certain memory ranges. These ranges
+ * are typically protected for secure-world access only and also from
+ * certain masters only, so guests cannot access them directly.
+ * Registers within the memory regions affect certain nodes. In this case,
+ * our input is an address and we map that address into a node. If the
+ * guest has ownership of that node, the access is allowed.
+ * Some registers contain bitfields and a single register may contain
+ * bits that affect multiple nodes.
+ */
+
#include <xen/iocap.h>
#include <xen/sched.h>
#include <xen/types.h>
@@ -23,6 +91,721 @@
#include <asm/regs.h>
#include <asm/platforms/xilinx-zynqmp-eemi.h>
+struct pm_access
+{
+    paddr_t addr;
+    bool hwdom_access;      /* HW domain gets access regardless. */
+};
+
+/*
+ * This table maps a node into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the node to grant it access to EEMI calls for that node.
+ */
+static const struct pm_access pm_node_access[] = {
+ /* MM_RPU grants access to all RPU Nodes. */
+ [NODE_RPU] = { MM_RPU },
+ [NODE_RPU_0] = { MM_RPU },
+ [NODE_RPU_1] = { MM_RPU },
+ [NODE_IPI_RPU_0] = { MM_RPU },
+
+ /* GPU nodes. */
+ [NODE_GPU] = { MM_GPU },
+ [NODE_GPU_PP_0] = { MM_GPU },
+ [NODE_GPU_PP_1] = { MM_GPU },
+
+ [NODE_USB_0] = { MM_USB3_0_XHCI },
+ [NODE_USB_1] = { MM_USB3_1_XHCI },
+ [NODE_TTC_0] = { MM_TTC0 },
+ [NODE_TTC_1] = { MM_TTC1 },
+ [NODE_TTC_2] = { MM_TTC2 },
+ [NODE_TTC_3] = { MM_TTC3 },
+ [NODE_SATA] = { MM_SATA_AHCI_HBA },
+ [NODE_ETH_0] = { MM_GEM0 },
+ [NODE_ETH_1] = { MM_GEM1 },
+ [NODE_ETH_2] = { MM_GEM2 },
+ [NODE_ETH_3] = { MM_GEM3 },
+ [NODE_UART_0] = { MM_UART0 },
+ [NODE_UART_1] = { MM_UART1 },
+ [NODE_SPI_0] = { MM_SPI0 },
+ [NODE_SPI_1] = { MM_SPI1 },
+ [NODE_I2C_0] = { MM_I2C0 },
+ [NODE_I2C_1] = { MM_I2C1 },
+ [NODE_SD_0] = { MM_SD0 },
+ [NODE_SD_1] = { MM_SD1 },
+ [NODE_DP] = { MM_DP },
+
+ /* Guest with GDMA Channel 0 gets PM access. Other guests don't. */
+ [NODE_GDMA] = { MM_GDMA_CH0 },
+ /* Guest with ADMA Channel 0 gets PM access. Other guests don't. */
+ [NODE_ADMA] = { MM_ADMA_CH0 },
+
+ [NODE_NAND] = { MM_NAND },
+ [NODE_QSPI] = { MM_QSPI },
+ [NODE_GPIO] = { MM_GPIO },
+ [NODE_CAN_0] = { MM_CAN0 },
+ [NODE_CAN_1] = { MM_CAN1 },
+
+ /* Only for the hardware domain. */
+ [NODE_AFI] = { .hwdom_access = true },
+ [NODE_APLL] = { .hwdom_access = true },
+ [NODE_VPLL] = { .hwdom_access = true },
+ [NODE_DPLL] = { .hwdom_access = true },
+ [NODE_RPLL] = { .hwdom_access = true },
+ [NODE_IOPLL] = { .hwdom_access = true },
+ [NODE_DDR] = { .hwdom_access = true },
+ [NODE_IPI_APU] = { .hwdom_access = true },
+ [NODE_PCAP] = { .hwdom_access = true },
+
+ [NODE_PCIE] = { MM_PCIE_ATTRIB },
+ [NODE_RTC] = { MM_RTC },
+};
+
+/*
+ * This table maps reset line IDs into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the affected node to grant it access to EEMI calls for
+ * resetting that node.
+ */
+#define XILPM_RESET_IDX(n) ((n) - XILPM_RESET_PCIE_CFG)
+static const struct pm_access pm_reset_access[] = {
+ [XILPM_RESET_IDX(XILPM_RESET_PCIE_CFG)] = { MM_AXIPCIE_MAIN },
+ [XILPM_RESET_IDX(XILPM_RESET_PCIE_BRIDGE)] = { MM_PCIE_ATTRIB },
+ [XILPM_RESET_IDX(XILPM_RESET_PCIE_CTRL)] = { MM_PCIE_ATTRIB },
+
+ [XILPM_RESET_IDX(XILPM_RESET_DP)] = { MM_DP },
+ [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRF)] = { MM_SWDT },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM5)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM4)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM3)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM2)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM1)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM0)] = { .hwdom_access = true },
+
+ /* Channel 0 grants PM access. */
+ [XILPM_RESET_IDX(XILPM_RESET_GDMA)] = { MM_GDMA_CH0 },
+ [XILPM_RESET_IDX(XILPM_RESET_GPU_PP1)] = { MM_GPU },
+ [XILPM_RESET_IDX(XILPM_RESET_GPU_PP0)] = { MM_GPU },
+ [XILPM_RESET_IDX(XILPM_RESET_GT)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_SATA)] = { MM_SATA_AHCI_HBA },
+
+ [XILPM_RESET_IDX(XILPM_RESET_APM_FPD)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_SOFT)] = { .hwdom_access = true },
+
+ [XILPM_RESET_IDX(XILPM_RESET_GEM0)] = { MM_GEM0 },
+ [XILPM_RESET_IDX(XILPM_RESET_GEM1)] = { MM_GEM1 },
+ [XILPM_RESET_IDX(XILPM_RESET_GEM2)] = { MM_GEM2 },
+ [XILPM_RESET_IDX(XILPM_RESET_GEM3)] = { MM_GEM3 },
+
+ [XILPM_RESET_IDX(XILPM_RESET_QSPI)] = { MM_QSPI },
+ [XILPM_RESET_IDX(XILPM_RESET_UART0)] = { MM_UART0 },
+ [XILPM_RESET_IDX(XILPM_RESET_UART1)] = { MM_UART1 },
+ [XILPM_RESET_IDX(XILPM_RESET_SPI0)] = { MM_SPI0 },
+ [XILPM_RESET_IDX(XILPM_RESET_SPI1)] = { MM_SPI1 },
+ [XILPM_RESET_IDX(XILPM_RESET_SDIO0)] = { MM_SD0 },
+ [XILPM_RESET_IDX(XILPM_RESET_SDIO1)] = { MM_SD1 },
+ [XILPM_RESET_IDX(XILPM_RESET_CAN0)] = { MM_CAN0 },
+ [XILPM_RESET_IDX(XILPM_RESET_CAN1)] = { MM_CAN1 },
+ [XILPM_RESET_IDX(XILPM_RESET_I2C0)] = { MM_I2C0 },
+ [XILPM_RESET_IDX(XILPM_RESET_I2C1)] = { MM_I2C1 },
+ [XILPM_RESET_IDX(XILPM_RESET_TTC0)] = { MM_TTC0 },
+ [XILPM_RESET_IDX(XILPM_RESET_TTC1)] = { MM_TTC1 },
+ [XILPM_RESET_IDX(XILPM_RESET_TTC2)] = { MM_TTC2 },
+ [XILPM_RESET_IDX(XILPM_RESET_TTC3)] = { MM_TTC3 },
+ [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRL)] = { MM_SWDT },
+ [XILPM_RESET_IDX(XILPM_RESET_NAND)] = { MM_NAND },
+ [XILPM_RESET_IDX(XILPM_RESET_ADMA)] = { MM_ADMA_CH0 },
+ [XILPM_RESET_IDX(XILPM_RESET_GPIO)] = { MM_GPIO },
+ [XILPM_RESET_IDX(XILPM_RESET_IOU_CC)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_TIMESTAMP)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_R50)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_R51)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_AMBA)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_OCM)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_PGE)] = { MM_RPU },
+
+ [XILPM_RESET_IDX(XILPM_RESET_USB0_CORERESET)] = { MM_USB3_0_XHCI },
+ [XILPM_RESET_IDX(XILPM_RESET_USB0_HIBERRESET)] = { MM_USB3_0_XHCI },
+ [XILPM_RESET_IDX(XILPM_RESET_USB0_APB)] = { MM_USB3_0_XHCI },
+
+ [XILPM_RESET_IDX(XILPM_RESET_USB1_CORERESET)] = { MM_USB3_1_XHCI },
+ [XILPM_RESET_IDX(XILPM_RESET_USB1_HIBERRESET)] = { MM_USB3_1_XHCI },
+ [XILPM_RESET_IDX(XILPM_RESET_USB1_APB)] = { MM_USB3_1_XHCI },
+
+ [XILPM_RESET_IDX(XILPM_RESET_IPI)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_APM_LPD)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RTC)] = { MM_RTC },
+ [XILPM_RESET_IDX(XILPM_RESET_SYSMON)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_AFI_FM6)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_LPD_SWDT)] = { MM_SWDT },
+ [XILPM_RESET_IDX(XILPM_RESET_FPD)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG1)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG0)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_DBG_LPD)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_DBG_FPD)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_APLL)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_DPLL)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_VPLL)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_IOPLL)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RPLL)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_0)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_1)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_2)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_3)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_4)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_5)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_6)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_7)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_8)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_9)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_10)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_11)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_12)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_13)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_14)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_15)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_16)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_17)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_18)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_19)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_20)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_21)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_22)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_23)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_24)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_25)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_26)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_27)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_28)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_29)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_30)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_31)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_RPU_LS)] = { MM_RPU },
+ [XILPM_RESET_IDX(XILPM_RESET_PS_ONLY)] = { .hwdom_access = true },
+ [XILPM_RESET_IDX(XILPM_RESET_PL)] = { .hwdom_access = true },
+};
+
+/*
+ * This table maps MMIO regions to nodes and access policies.
+ * An access is allowed if the domain owns the node the register
+ * (or bitfield, when a mask is given) belongs to, or, for
+ * hwdom_access entries, if the domain is the hardware domain.
+ */
+static const struct {
+    paddr_t start;
+    paddr_t size;
+    uint32_t mask;          /* Zero means no mask, i.e. all bits. */
+    enum pm_node_id node;
+    bool hwdom_access;
+    bool readonly;
+} pm_mmio_access[] = {
+ {
+ .start = MM_CRF_APB + R_CRF_APLL_CTRL,
+ .size = R_CRF_ACPU_CTRL,
+ .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_IOPLL_CTRL,
+ .size = R_CRL_RPLL_TO_FPD_CTRL,
+ .hwdom_access = true
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_DP_VIDEO_REF_CTRL,
+ .size = 4, .node = NODE_DP
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_DP_AUDIO_REF_CTRL,
+ .size = 4, .node = NODE_DP
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_DP_STC_REF_CTRL,
+ .size = 4, .node = NODE_DP
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_GPU_REF_CTRL,
+ .size = 4, .node = NODE_GPU
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_SATA_REF_CTRL,
+ .size = 4, .node = NODE_SATA
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_PCIE_REF_CTRL,
+ .size = 4, .node = NODE_PCIE
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_GDMA_REF_CTRL,
+ .size = 4, .node = NODE_GDMA
+ },
+ {
+ .start = MM_CRF_APB + R_CRF_DPDMA_REF_CTRL,
+ .size = 4, .node = NODE_DP
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_USB3_DUAL_REF_CTRL,
+ .size = 4, .node = NODE_USB_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_USB0_BUS_REF_CTRL,
+ .size = 4, .node = NODE_USB_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_USB1_BUS_REF_CTRL,
+ .size = 4, .node = NODE_USB_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_GEM0_REF_CTRL,
+ .size = 4, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_GEM1_REF_CTRL,
+ .size = 4, .node = NODE_ETH_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_GEM2_REF_CTRL,
+ .size = 4, .node = NODE_ETH_2
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_GEM3_REF_CTRL,
+ .size = 4, .node = NODE_ETH_3
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_QSPI_REF_CTRL,
+ .size = 4, .node = NODE_QSPI
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_SDIO0_REF_CTRL,
+ .size = 4, .node = NODE_SD_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_SDIO1_REF_CTRL,
+ .size = 4, .node = NODE_SD_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_UART0_REF_CTRL,
+ .size = 4, .node = NODE_UART_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_UART1_REF_CTRL,
+ .size = 4, .node = NODE_UART_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_SPI0_REF_CTRL,
+ .size = 4, .node = NODE_SPI_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_SPI1_REF_CTRL,
+ .size = 4, .node = NODE_SPI_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_CAN0_REF_CTRL,
+ .size = 4, .node = NODE_CAN_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_CAN1_REF_CTRL,
+ .size = 4, .node = NODE_CAN_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_CPU_R5_CTRL,
+ .size = 4, .node = NODE_RPU
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_IOU_SWITCH_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_CSU_PLL_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PCAP_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_LPD_SWITCH_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_LPD_LSBUS_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_DBG_LPD_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_NAND_REF_CTRL,
+ .size = 4, .node = NODE_NAND
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_ADMA_REF_CTRL,
+ .size = 4, .node = NODE_ADMA
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL0_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL1_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL2_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL3_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL0_THR_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL1_THR_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL2_THR_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL3_THR_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL0_THR_CNT,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL1_THR_CNT,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL2_THR_CNT,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_PL3_THR_CNT,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_GEM_TSU_REF_CTRL,
+ .size = 4, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_DLL_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_AMS_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_I2C0_REF_CTRL,
+ .size = 4, .node = NODE_I2C_0
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_I2C1_REF_CTRL,
+ .size = 4, .node = NODE_I2C_1
+ },
+ {
+ .start = MM_CRL_APB + R_CRL_TIMESTAMP_REF_CTRL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_MIO_PIN_0,
+ .size = R_IOU_SLCR_MIO_MST_TRI2,
+ .hwdom_access = true
+ },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_WDT_CLK_SEL,
+        .size = 4, .hwdom_access = true
+    },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_CAN_MIO_CTRL,
+ .size = 4, .mask = 0x1ff, .node = NODE_CAN_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_CAN_MIO_CTRL,
+ .size = 4, .mask = 0x1ff << 15, .node = NODE_CAN_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+ .size = 4, .mask = 0xf, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+ .size = 4, .mask = 0xf << 5, .node = NODE_ETH_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+ .size = 4, .mask = 0xf << 10, .node = NODE_ETH_2
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+ .size = 4, .mask = 0xf << 15, .node = NODE_ETH_3
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+ .size = 4, .mask = 0x7 << 20, .hwdom_access = true
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_SDIO_CLK_CTRL,
+ .size = 4, .mask = 0x7, .node = NODE_SD_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_SDIO_CLK_CTRL,
+ .size = 4, .mask = 0x7 << 17, .node = NODE_SD_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_CTRL_REG_SD,
+ .size = 4, .mask = 0x1, .node = NODE_SD_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_CTRL_REG_SD,
+ .size = 4, .mask = 0x1 << 15, .node = NODE_SD_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_SD_ITAPDLY,
+ .size = R_IOU_SLCR_SD_CDN_CTRL,
+ .mask = 0x3ff << 0, .node = NODE_SD_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_SD_ITAPDLY,
+ .size = R_IOU_SLCR_SD_CDN_CTRL,
+ .mask = 0x3ff << 16, .node = NODE_SD_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+ .size = 4, .mask = 0x3 << 0, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+ .size = 4, .mask = 0x3 << 2, .node = NODE_ETH_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+ .size = 4, .mask = 0x3 << 4, .node = NODE_ETH_2
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+ .size = 4, .mask = 0x3 << 6, .node = NODE_ETH_3
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+ .size = 4, .mask = 0x3 << 0, .node = NODE_TTC_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+ .size = 4, .mask = 0x3 << 2, .node = NODE_TTC_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+ .size = 4, .mask = 0x3 << 4, .node = NODE_TTC_2
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+ .size = 4, .mask = 0x3 << 6, .node = NODE_TTC_3
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TAPDLY_BYPASS,
+ .size = 4, .mask = 0x3, .node = NODE_NAND
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TAPDLY_BYPASS,
+ .size = 4, .mask = 0x1 << 2, .node = NODE_QSPI
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 0, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 4, .node = NODE_ETH_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 8, .node = NODE_ETH_2
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 12, .node = NODE_ETH_3
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 16, .node = NODE_SD_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 20, .node = NODE_SD_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 24, .node = NODE_NAND
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+ .size = 4, .mask = 0xf << 28, .node = NODE_QSPI
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_VIDEO_PSS_CLK_SEL,
+ .size = 4, .hwdom_access = true
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM0,
+ .size = 4, .node = NODE_ETH_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM1,
+ .size = 4, .node = NODE_ETH_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM2,
+ .size = 4, .node = NODE_ETH_2
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM3,
+ .size = 4, .node = NODE_ETH_3
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_SD0,
+ .size = 4, .node = NODE_SD_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_SD1,
+ .size = 4, .node = NODE_SD_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_CAN0,
+ .size = 4, .node = NODE_CAN_0
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_CAN1,
+ .size = 4, .node = NODE_CAN_1
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_LQSPI,
+ .size = 4, .node = NODE_QSPI
+ },
+ {
+ .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_NAND,
+ .size = 4, .node = NODE_NAND
+ },
+ {
+ .start = MM_PMU_GLOBAL + R_PMU_GLOBAL_PWR_STATE,
+ .size = 4, .readonly = true, .hwdom_access = true
+ },
+ {
+ .start = MM_PMU_GLOBAL + R_PMU_GLOBAL_GLOBAL_GEN_STORAGE0,
+ .size = R_PMU_GLOBAL_PERS_GLOB_GEN_STORAGE7,
+ .readonly = true, .hwdom_access = true
+ },
+ {
+ /* Universal read-only access to CRF. Linux CCF needs this. */
+ .start = MM_CRF_APB, .size = 0x104, .readonly = true,
+ .hwdom_access = true
+ },
+ {
+ /* Universal read-only access to CRL. Linux CCF needs this. */
+ .start = MM_CRL_APB, .size = 0x284, .readonly = true,
+ .hwdom_access = true
+ }
+};
+
+static bool pm_check_access(const struct pm_access *acl, struct domain *d,
+                            uint32_t idx)
+{
+    unsigned long mfn;
+
+    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
+        return true;
+
+    if ( !acl[idx].addr )
+        return false;
+
+    mfn = PFN_DOWN(acl[idx].addr);
+    return iomem_access_permitted(d, mfn, mfn);
+}
+
+/* Check if a domain has access to a node. */
+static bool domain_has_node_access(struct domain *d, uint32_t nodeid)
+{
+    if ( nodeid >= ARRAY_SIZE(pm_node_access) )
+        return false;
+
+    return pm_check_access(pm_node_access, d, nodeid);
+}
+
+/* Check if a domain has access to a reset line. */
+static bool domain_has_reset_access(struct domain *d, uint32_t rst)
+{
+    if ( rst < XILPM_RESET_PCIE_CFG )
+        return false;
+
+    rst -= XILPM_RESET_PCIE_CFG;
+
+    if ( rst >= ARRAY_SIZE(pm_reset_access) )
+        return false;
+
+    return pm_check_access(pm_reset_access, d, rst);
+}
+
+/*
+ * Check if a given domain has access to perform an indirect
+ * MMIO access.
+ *
+ * For writes, the provided mask is clipped down to the bits the
+ * domain is allowed to modify.
+ */
+static bool domain_has_mmio_access(struct domain *d,
+                                   bool write, paddr_t addr,
+                                   uint32_t *mask)
+{
+    unsigned int i;
+    bool ret = false;
+    uint32_t prot_mask = 0;
+
+    /*
+     * The hardware domain gets read access to everything.
+     * Lower layers will do further filtering.
+     */
+    if ( !write && is_hardware_domain(d) )
+        return true;
+
+    /* Scan the whole ACL; a register may match more than one entry. */
+    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
+    {
+        if ( addr < pm_mmio_access[i].start ||
+             addr > pm_mmio_access[i].start + pm_mmio_access[i].size )
+            continue;
+
+        if ( write && pm_mmio_access[i].readonly )
+            continue;
+        if ( pm_mmio_access[i].hwdom_access )
+        {
+            if ( !is_hardware_domain(d) )
+                continue;
+        }
+        else if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
+            continue;
+
+        /* We've got access to this reg (or parts of it). */
+        ret = true;
+
+        /* Accumulate the writable bits granted by each matching entry. */
+        prot_mask |= pm_mmio_access[i].mask ?: 0xFFFFFFFF;
+    }
+
+    /* Masking only applies to writes. */
+    if ( write )
+        *mask &= prot_mask;
+
+    return ret;
+}
+
bool zynqmp_eemi(struct cpu_user_regs *regs)
{
return false;
--
1.9.1