* [PATCH v2 0/2] ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
@ 2021-02-08 5:25 Bin Meng
2021-02-08 5:25 ` [PATCH v2 1/2] hw/ssi: xilinx_spips: Clean up coding convention issues Bin Meng
2021-02-08 5:25 ` [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support Bin Meng
0 siblings, 2 replies; 11+ messages in thread
From: Bin Meng @ 2021-02-08 5:25 UTC (permalink / raw)
To: Alistair Francis, Edgar E . Iglesias, Peter Maydell, qemu-arm,
qemu-devel
Cc: Bin Meng
From: Bin Meng <bin.meng@windriver.com>
ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
crash. This is observed when testing VxWorks 7.
Add a basic implementation of QSPI DMA functionality.
Changes in v2:
- Remove unconnected TYPE_STREAM_SINK link property
- Add a TYPE_MEMORY_REGION link property, to allow board code to tell
the device which view of memory it does DMA to (see the sketch below)
- Replace cpu_physical_memory_write() with address_space_write()
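For illustration, this is roughly how board code could wire the new link
property before realizing the device. Only the "xlnx_zynqmp_qspips_dma"
property name below comes from this series; the surrounding function and
variable names are assumptions for the example:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/sysbus.h"

/*
 * Hypothetical board wiring: point the QSPI's DMA view at the board's
 * system memory region via the link property added in patch 2.
 * 'qspi' and 'sysmem' stand in for whatever the machine model uses.
 */
static void board_wire_qspi_dma(DeviceState *qspi, MemoryRegion *sysmem)
{
    object_property_set_link(OBJECT(qspi), "xlnx_zynqmp_qspips_dma",
                             OBJECT(sysmem), &error_abort);
    sysbus_realize(SYS_BUS_DEVICE(qspi), &error_fatal);
}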
Xuzhou Cheng (2):
hw/ssi: xilinx_spips: Clean up coding convention issues
hw/ssi: xilinx_spips: Implement basic QSPI DMA support
include/hw/ssi/xilinx_spips.h | 3 +-
hw/ssi/xilinx_spips.c | 230 ++++++++++++++++++++++++++++++++++++------
2 files changed, 200 insertions(+), 33 deletions(-)
--
2.7.4
* [PATCH v2 1/2] hw/ssi: xilinx_spips: Clean up coding convention issues
2021-02-08 5:25 [PATCH v2 0/2] ZynqMP QSPI supports SPI transfer using DMA mode, but currently this Bin Meng
@ 2021-02-08 5:25 ` Bin Meng
2021-02-08 5:25 ` [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support Bin Meng
1 sibling, 0 replies; 11+ messages in thread
From: Bin Meng @ 2021-02-08 5:25 UTC (permalink / raw)
To: Alistair Francis, Edgar E . Iglesias, Peter Maydell, qemu-arm,
qemu-devel
Cc: Xuzhou Cheng, Bin Meng
From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
There are some coding convention warnings in xilinx_spips.c,
as reported by:
$ ./scripts/checkpatch.pl hw/ssi/xilinx_spips.c
Let's clean them up.
Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
(no changes since v1)
hw/ssi/xilinx_spips.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index a897034..8a0cc22 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -176,7 +176,8 @@
FIELD(GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET, 0, 1)
#define R_GQSPI_GFIFO_THRESH (0x150 / 4)
#define R_GQSPI_DATA_STS (0x15c / 4)
-/* We use the snapshot register to hold the core state for the currently
+/*
+ * We use the snapshot register to hold the core state for the currently
* or most recently executed command. So the generic fifo format is defined
* for the snapshot register
*/
@@ -424,7 +425,8 @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
xlnx_zynqmp_qspips_update_ixr(s);
}
-/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
+/*
+ * N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
* column wise (from element 0 to N-1). num is the length of x, and dir
* reverses the direction of the transform. Best illustrated by example:
* Each digit in the below array is a single bit (num == 3):
@@ -637,8 +639,10 @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
tx_rx[i] = tx;
}
} else {
- /* Extract a dummy byte and generate dummy cycles according to the
- * link state */
+ /*
+ * Extract a dummy byte and generate dummy cycles according to the
+ * link state
+ */
tx = fifo8_pop(&s->tx_fifo);
dummy_cycles = 8 / s->link_state;
}
@@ -721,8 +725,9 @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
}
break;
case (SNOOP_ADDR):
- /* Address has been transmitted, transmit dummy cycles now if
- * needed */
+ /*
+ * Address has been transmitted, transmit dummy cycles now if needed
+ */
if (s->cmd_dummies < 0) {
s->snoop_state = SNOOP_NONE;
} else {
@@ -876,7 +881,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
}
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
- unsigned size)
+ unsigned size)
{
XilinxSPIPS *s = opaque;
uint32_t mask = ~0;
@@ -970,7 +975,7 @@ static uint64_t xlnx_zynqmp_qspips_read(void *opaque,
}
static void xilinx_spips_write(void *opaque, hwaddr addr,
- uint64_t value, unsigned size)
+ uint64_t value, unsigned size)
{
int mask = ~0;
XilinxSPIPS *s = opaque;
@@ -1072,7 +1077,7 @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
}
static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
- uint64_t value, unsigned size)
+ uint64_t value, unsigned size)
{
XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
uint32_t reg = addr / 4;
--
2.7.4
* [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 5:25 [PATCH v2 0/2] ZynqMP QSPI supports SPI transfer using DMA mode, but currently this Bin Meng
2021-02-08 5:25 ` [PATCH v2 1/2] hw/ssi: xilinx_spips: Clean up coding convention issues Bin Meng
@ 2021-02-08 5:25 ` Bin Meng
2021-02-08 12:44 ` Edgar E. Iglesias
1 sibling, 1 reply; 11+ messages in thread
From: Bin Meng @ 2021-02-08 5:25 UTC (permalink / raw)
To: Alistair Francis, Edgar E . Iglesias, Peter Maydell, qemu-arm,
qemu-devel
Cc: Xuzhou Cheng, Bin Meng
From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
crash. This is observed when testing VxWorks 7.
Add a basic implementation of QSPI DMA functionality.
Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
---
Changes in v2:
- Remove unconnected TYPE_STREAM_SINK link property
- Add a TYPE_MEMORY_REGION link property, to allow board code to tell
the device which view of memory it does DMA to
- Replace cpu_physical_memory_write() with address_space_write()
include/hw/ssi/xilinx_spips.h | 3 +-
hw/ssi/xilinx_spips.c | 207 +++++++++++++++++++++++++++++++++++++-----
2 files changed, 186 insertions(+), 24 deletions(-)
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
index 3eae734..2478839 100644
--- a/include/hw/ssi/xilinx_spips.h
+++ b/include/hw/ssi/xilinx_spips.h
@@ -99,7 +99,8 @@ typedef struct XilinxQSPIPS XilinxQSPIPS;
struct XlnxZynqMPQSPIPS {
XilinxQSPIPS parent_obj;
- StreamSink *dma;
+ MemoryRegion *dma_mr;
+ AddressSpace *dma_as;
int gqspi_irqline;
uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index 8a0cc22..9caca7b 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -195,12 +195,72 @@
#define R_GQSPI_MOD_ID (0x1fc / 4)
#define R_GQSPI_MOD_ID_RESET (0x10a0000)
-#define R_QSPIDMA_DST_CTRL (0x80c / 4)
-#define R_QSPIDMA_DST_CTRL_RESET (0x803ffa00)
-#define R_QSPIDMA_DST_I_MASK (0x820 / 4)
-#define R_QSPIDMA_DST_I_MASK_RESET (0xfe)
-#define R_QSPIDMA_DST_CTRL2 (0x824 / 4)
-#define R_QSPIDMA_DST_CTRL2_RESET (0x081bfff8)
+#define GQSPI_CNFG_MODE_EN_IO (0)
+#define GQSPI_CNFG_MODE_EN_DMA (2)
+
+/*
+ * Ref: UG1087 (v1.7) February 8, 2019
+ * https://www.xilinx.com/html_docs/registers/ug1087/ug1087-zynq-ultrascale-registers.html
+ */
+REG32(GQSPI_DMA_ADDR, 0x800)
+ FIELD(GQSPI_DMA_ADDR, ADDR, 2, 30)
+REG32(GQSPI_DMA_SIZE, 0x804)
+ FIELD(GQSPI_DMA_SIZE, SIZE, 2, 27)
+REG32(GQSPI_DMA_STS, 0x808)
+ FIELD(GQSPI_DMA_STS, DONE_CNT, 13, 3)
+ FIELD(GQSPI_DMA_STS, BUSY, 0, 1)
+REG32(GQSPI_DMA_CTRL, 0x80c)
+ FIELD(GQSPI_DMA_CTRL, FIFO_LVL_HIT_THRESH, 25, 7)
+ FIELD(GQSPI_DMA_CTRL, APB_ERR_RESP, 24, 1)
+ FIELD(GQSPI_DMA_CTRL, ENDIANNESS, 23, 1)
+ FIELD(GQSPI_DMA_CTRL, AXI_BRST_TYPE, 22, 1)
+ FIELD(GQSPI_DMA_CTRL, TIMEOUT_VAL, 10, 12)
+ FIELD(GQSPI_DMA_CTRL, FIFO_THRESH, 2, 8)
+ FIELD(GQSPI_DMA_CTRL, PAUSE_STRM, 1, 1)
+ FIELD(GQSPI_DMA_CTRL, PAUSE_MEM, 0, 1)
+REG32(GQSPI_DMA_I_STS, 0x814)
+ FIELD(GQSPI_DMA_I_STS, FIFO_OVERFLOW, 7, 1)
+ FIELD(GQSPI_DMA_I_STS, INVALID_APB, 6, 1)
+ FIELD(GQSPI_DMA_I_STS, THRESH_HIT, 5, 1)
+ FIELD(GQSPI_DMA_I_STS, TIMEOUT_MEM, 4, 1)
+ FIELD(GQSPI_DMA_I_STS, TIMEOUT_STRM, 3, 1)
+ FIELD(GQSPI_DMA_I_STS, AXI_BRESP_ERR, 2, 1)
+ FIELD(GQSPI_DMA_I_STS, DONE, 1, 1)
+REG32(GQSPI_DMA_I_EN, 0x818)
+ FIELD(GQSPI_DMA_I_EN, FIFO_OVERFLOW, 7, 1)
+ FIELD(GQSPI_DMA_I_EN, INVALID_APB, 6, 1)
+ FIELD(GQSPI_DMA_I_EN, THRESH_HIT, 5, 1)
+ FIELD(GQSPI_DMA_I_EN, TIMEOUT_MEM, 4, 1)
+ FIELD(GQSPI_DMA_I_EN, TIMEOUT_STRM, 3, 1)
+ FIELD(GQSPI_DMA_I_EN, AXI_BRESP_ERR, 2, 1)
+ FIELD(GQSPI_DMA_I_EN, DONE, 1, 1)
+REG32(GQSPI_DMA_I_DIS, 0x81c)
+ FIELD(GQSPI_DMA_I_DIS, FIFO_OVERFLOW, 7, 1)
+ FIELD(GQSPI_DMA_I_DIS, INVALID_APB, 6, 1)
+ FIELD(GQSPI_DMA_I_DIS, THRESH_HIT, 5, 1)
+ FIELD(GQSPI_DMA_I_DIS, TIMEOUT_MEM, 4, 1)
+ FIELD(GQSPI_DMA_I_DIS, TIMEOUT_STRM, 3, 1)
+ FIELD(GQSPI_DMA_I_DIS, AXI_BRESP_ERR, 2, 1)
+ FIELD(GQSPI_DMA_I_DIS, DONE, 1, 1)
+REG32(GQSPI_DMA_I_MASK, 0x820)
+ FIELD(GQSPI_DMA_I_MASK, FIFO_OVERFLOW, 7, 1)
+ FIELD(GQSPI_DMA_I_MASK, INVALID_APB, 6, 1)
+ FIELD(GQSPI_DMA_I_MASK, THRESH_HIT, 5, 1)
+ FIELD(GQSPI_DMA_I_MASK, TIMEOUT_MEM, 4, 1)
+ FIELD(GQSPI_DMA_I_MASK, TIMEOUT_STRM, 3, 1)
+ FIELD(GQSPI_DMA_I_MASK, AXI_BRESP_ERR, 2, 1)
+ FIELD(GQSPI_DMA_I_MASK, DONE, 1, 1)
+REG32(GQSPI_DMA_CTRL2, 0x824)
+ FIELD(GQSPI_DMA_CTRL2, ARCACHE, 24, 3)
+ FIELD(GQSPI_DMA_CTRL2, TIMEOUT_EN, 22, 1)
+ FIELD(GQSPI_DMA_CTRL2, TIMEOUT_PRE, 4, 12)
+ FIELD(GQSPI_DMA_CTRL2, MAX_OUTS_CMDS, 0, 4)
+REG32(GQSPI_DMA_ADDR_MSB, 0x828)
+ FIELD(GQSPI_DMA_ADDR_MSB, ADDR_MSB, 0, 12)
+
+#define R_GQSPI_DMA_CTRL_RESET (0x803ffa00)
+#define R_GQSPI_DMA_INT_MASK (0xfe)
+#define R_GQSPI_DMA_CTRL2_RESET (0x081bfff8)
/* size of TXRX FIFOs */
#define RXFF_A (128)
@@ -341,6 +401,7 @@ static void xilinx_spips_update_ixr(XilinxSPIPS *s)
static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
{
uint32_t gqspi_int;
+ uint32_t mode;
int new_irqline;
s->regs[R_GQSPI_ISR] &= ~IXR_SELF_CLEAR;
@@ -359,13 +420,20 @@ static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
IXR_TX_FIFO_NOT_FULL : 0);
/* GQSPI Interrupt Trigger Status */
- gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] & GQSPI_IXR_MASK;
- new_irqline = !!(gqspi_int & IXR_ALL);
-
- /* drive external interrupt pin */
- if (new_irqline != s->gqspi_irqline) {
- s->gqspi_irqline = new_irqline;
- qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
+ mode = ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, MODE_EN);
+ if (mode == GQSPI_CNFG_MODE_EN_IO) {
+ gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] \
+ & GQSPI_IXR_MASK;
+ new_irqline = !!(gqspi_int & IXR_ALL);
+
+ /* drive external interrupt pin */
+ if (new_irqline != s->gqspi_irqline) {
+ s->gqspi_irqline = new_irqline;
+ qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
+ }
+ } else if (mode == GQSPI_CNFG_MODE_EN_DMA) {
+ new_irqline = s->regs[R_GQSPI_DMA_I_STS] & ~s->regs[R_GQSPI_DMA_I_MASK];
+ qemu_set_irq(XILINX_SPIPS(s)->irq, !!new_irqline);
}
}
@@ -417,9 +485,9 @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
s->regs[R_GQSPI_GPIO] = 1;
s->regs[R_GQSPI_LPBK_DLY_ADJ] = R_GQSPI_LPBK_DLY_ADJ_RESET;
s->regs[R_GQSPI_MOD_ID] = R_GQSPI_MOD_ID_RESET;
- s->regs[R_QSPIDMA_DST_CTRL] = R_QSPIDMA_DST_CTRL_RESET;
- s->regs[R_QSPIDMA_DST_I_MASK] = R_QSPIDMA_DST_I_MASK_RESET;
- s->regs[R_QSPIDMA_DST_CTRL2] = R_QSPIDMA_DST_CTRL2_RESET;
+ s->regs[R_GQSPI_DMA_CTRL] = R_GQSPI_DMA_CTRL_RESET;
+ s->regs[R_GQSPI_DMA_I_MASK] = R_GQSPI_DMA_INT_MASK;
+ s->regs[R_GQSPI_DMA_CTRL2] = R_GQSPI_DMA_CTRL2_RESET;
s->man_start_com_g = false;
s->gqspi_irqline = 0;
xlnx_zynqmp_qspips_update_ixr(s);
@@ -843,6 +911,66 @@ static const void *pop_buf(Fifo8 *fifo, uint32_t max, uint32_t *num)
return ret;
}
+static void xlnx_zynqmp_gspips_dma_done(XlnxZynqMPQSPIPS *s)
+{
+ int cnt;
+
+ s->regs[R_GQSPI_DMA_STS] &= ~R_GQSPI_DMA_STS_BUSY_MASK;
+ s->regs[R_GQSPI_DMA_I_STS] |= R_GQSPI_DMA_I_STS_DONE_MASK;
+
+ cnt = ARRAY_FIELD_EX32(s->regs, GQSPI_DMA_STS, DONE_CNT) + 1;
+ ARRAY_FIELD_DP32(s->regs, GQSPI_DMA_STS, DONE_CNT, cnt);
+
+}
+
+static uint32_t xlnx_zynqmp_gspips_dma_advance(XlnxZynqMPQSPIPS *s,
+ uint32_t len, hwaddr dst)
+{
+ uint32_t size = s->regs[R_GQSPI_DMA_SIZE];
+
+ size -= len;
+ size &= R_GQSPI_DMA_SIZE_SIZE_MASK;
+ dst += len;
+
+ s->regs[R_GQSPI_DMA_SIZE] = size;
+ s->regs[R_GQSPI_DMA_ADDR] = (uint32_t) dst;
+ s->regs[R_GQSPI_DMA_ADDR_MSB] = dst >> 32;
+
+ return size;
+}
+
+static size_t xlnx_zynqmp_gspips_dma_push(XlnxZynqMPQSPIPS *s,
+ uint8_t *buf, size_t len, bool eop)
+{
+ hwaddr dst = (hwaddr)s->regs[R_GQSPI_DMA_ADDR_MSB] << 32
+ | s->regs[R_GQSPI_DMA_ADDR];
+ uint32_t size = s->regs[R_GQSPI_DMA_SIZE];
+ uint32_t mlen = MIN(size, len) & (~3); /* Size is word aligned */
+
+ if (size == 0 || len <= 0) {
+ return 0;
+ }
+
+ if (address_space_write(s->dma_as, dst, MEMTXATTRS_UNSPECIFIED, buf, mlen)
+ != MEMTX_OK) {
+ return 0;
+ }
+
+ size = xlnx_zynqmp_gspips_dma_advance(s, mlen, dst);
+
+ if (size == 0) {
+ xlnx_zynqmp_gspips_dma_done(s);
+ xlnx_zynqmp_qspips_update_ixr(s);
+ }
+
+ return mlen;
+}
+
+static bool xlnx_zynqmp_gspips_dma_can_push(XlnxZynqMPQSPIPS *s)
+{
+ return s->regs[R_GQSPI_DMA_SIZE] ? true : false;
+}
+
static void xlnx_zynqmp_qspips_notify(void *opaque)
{
XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(opaque);
@@ -850,7 +978,8 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
Fifo8 *recv_fifo;
if (ARRAY_FIELD_EX32(rq->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
- if (!(ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN) == 2)) {
+ if (ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN)
+ != GQSPI_CNFG_MODE_EN_DMA) {
return;
}
recv_fifo = &rq->rx_fifo_g;
@@ -861,7 +990,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
recv_fifo = &s->rx_fifo;
}
while (recv_fifo->num >= 4
- && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
+ && xlnx_zynqmp_gspips_dma_can_push(rq))
{
size_t ret;
uint32_t num;
@@ -874,7 +1003,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
memcpy(rq->dma_buf, rxd, num);
- ret = stream_push(rq->dma, rq->dma_buf, num, false);
+ ret = xlnx_zynqmp_gspips_dma_push(rq, rq->dma_buf, num, false);
assert(ret == num);
xlnx_zynqmp_qspips_check_flush(rq);
}
@@ -1127,6 +1256,31 @@ static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
case R_GQSPI_GF_SNAPSHOT:
case R_GQSPI_MOD_ID:
break;
+ case R_GQSPI_DMA_ADDR:
+ s->regs[R_GQSPI_DMA_ADDR] = value & R_GQSPI_DMA_ADDR_ADDR_MASK;
+ break;
+ case R_GQSPI_DMA_SIZE:
+ s->regs[R_GQSPI_DMA_SIZE] = value & R_GQSPI_DMA_SIZE_SIZE_MASK;
+ break;
+ case R_GQSPI_DMA_STS:
+ s->regs[R_GQSPI_DMA_STS] &= ~(value &
+ R_GQSPI_DMA_STS_DONE_CNT_MASK);
+ break;
+ case R_GQSPI_DMA_I_EN:
+ s->regs[R_GQSPI_DMA_I_EN] = value & R_GQSPI_DMA_INT_MASK;
+ s->regs[R_GQSPI_DMA_I_MASK] &= ~s->regs[R_GQSPI_DMA_I_EN];
+ s->regs[R_GQSPI_DMA_I_DIS] &= ~s->regs[R_GQSPI_DMA_I_EN];
+ break;
+ case R_GQSPI_DMA_I_DIS:
+ s->regs[R_GQSPI_DMA_I_DIS] |= value & R_GQSPI_DMA_INT_MASK;
+ s->regs[R_GQSPI_DMA_I_MASK] |= s->regs[R_GQSPI_DMA_I_DIS];
+ s->regs[R_GQSPI_DMA_I_EN] &= ~s->regs[R_GQSPI_DMA_I_DIS];
+ s->regs[R_GQSPI_DMA_STS] &= 0;
+ break;
+ case R_GQSPI_DMA_ADDR_MSB:
+ s->regs[R_GQSPI_DMA_ADDR_MSB] = value &
+ R_GQSPI_DMA_ADDR_MSB_ADDR_MSB_MASK;
+ break;
default:
s->regs[reg] = value;
break;
@@ -1353,15 +1507,22 @@ static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
fifo32_create(&s->fifo_g, 32);
+
+ if (s->dma_mr) {
+ s->dma_as = g_malloc0(sizeof(AddressSpace));
+ address_space_init(s->dma_as, s->dma_mr, NULL);
+ } else {
+ s->dma_as = &address_space_memory;
+ }
}
static void xlnx_zynqmp_qspips_init(Object *obj)
{
- XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(obj);
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(obj);
- object_property_add_link(obj, "stream-connected-dma", TYPE_STREAM_SINK,
- (Object **)&rq->dma,
- object_property_allow_set_link,
+ object_property_add_link(obj, "xlnx_zynqmp_qspips_dma", TYPE_MEMORY_REGION,
+ (Object **)&s->dma_mr,
+ qdev_prop_allow_set_link_before_realize,
OBJ_PROP_LINK_STRONG);
}
--
2.7.4
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 5:25 ` [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support Bin Meng
@ 2021-02-08 12:44 ` Edgar E. Iglesias
2021-02-08 14:10 ` Bin Meng
0 siblings, 1 reply; 11+ messages in thread
From: Edgar E. Iglesias @ 2021-02-08 12:44 UTC (permalink / raw)
To: Bin Meng
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng, qemu-devel,
francisco.iglesias, qemu-arm, Alistair Francis
On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
>
> ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> crash. This is observed when testing VxWorks 7.
>
> Add a basic implementation of QSPI DMA functionality.
>
> Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> Signed-off-by: Bin Meng <bin.meng@windriver.com>
+ Francisco
Hi,
As Peter commented on the previous version, the DMA unit is actually separate.
This module is better modelled by pushing data through the Stream framework
into the DMA. The DMA model is not upstream but can be found here:
https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
Feel free to send a patch to upstream with that model (perhaps changing
the filename to something more suitable, e.g. xlnx-csu-stream-dma.c).
You can use --author="Edgar E. Iglesias <edgar.iglesias@xilinx.com>".
The DMA should be mapped to 0xFF0F0800 and IRQ 15.
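A minimal sketch of what that hookup could look like in the SoC model,
assuming the DMA is modelled as a sysbus device; 'dma' and 'gic_spi' are
placeholder names, only the address and IRQ number come from the text
above:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/irq.h"
#include "hw/sysbus.h"

/*
 * Hypothetical mapping of a separate QSPI DMA model at the address and
 * interrupt given above (0xFF0F0800, IRQ 15).
 */
static void zynqmp_map_qspi_dma(SysBusDevice *dma, qemu_irq *gic_spi)
{
    sysbus_realize(dma, &error_fatal);
    sysbus_mmio_map(dma, 0, 0xFF0F0800);
    sysbus_connect_irq(dma, 0, gic_spi[15]);
}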
CC'd Francisco; he's going to publish some smoke-tests for this.
Cheers,
Edgar
>
> ---
>
> Changes in v2:
> - Remove unconnected TYPE_STREAM_SINK link property
> - Add a TYPE_MEMORY_REGION link property, to allow board code to tell
> the device which view of memory it does DMA to
> - Replace cpu_physical_memory_write() with address_space_write()
>
> include/hw/ssi/xilinx_spips.h | 3 +-
> hw/ssi/xilinx_spips.c | 207 +++++++++++++++++++++++++++++++++++++-----
> 2 files changed, 186 insertions(+), 24 deletions(-)
>
> diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
> index 3eae734..2478839 100644
> --- a/include/hw/ssi/xilinx_spips.h
> +++ b/include/hw/ssi/xilinx_spips.h
> @@ -99,7 +99,8 @@ typedef struct XilinxQSPIPS XilinxQSPIPS;
> struct XlnxZynqMPQSPIPS {
> XilinxQSPIPS parent_obj;
>
> - StreamSink *dma;
> + MemoryRegion *dma_mr;
> + AddressSpace *dma_as;
> int gqspi_irqline;
>
> uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
> diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
> index 8a0cc22..9caca7b 100644
> --- a/hw/ssi/xilinx_spips.c
> +++ b/hw/ssi/xilinx_spips.c
> @@ -195,12 +195,72 @@
> #define R_GQSPI_MOD_ID (0x1fc / 4)
> #define R_GQSPI_MOD_ID_RESET (0x10a0000)
>
> -#define R_QSPIDMA_DST_CTRL (0x80c / 4)
> -#define R_QSPIDMA_DST_CTRL_RESET (0x803ffa00)
> -#define R_QSPIDMA_DST_I_MASK (0x820 / 4)
> -#define R_QSPIDMA_DST_I_MASK_RESET (0xfe)
> -#define R_QSPIDMA_DST_CTRL2 (0x824 / 4)
> -#define R_QSPIDMA_DST_CTRL2_RESET (0x081bfff8)
> +#define GQSPI_CNFG_MODE_EN_IO (0)
> +#define GQSPI_CNFG_MODE_EN_DMA (2)
> +
> +/*
> + * Ref: UG1087 (v1.7) February 8, 2019
> + * https://www.xilinx.com/html_docs/registers/ug1087/ug1087-zynq-ultrascale-registers.html
> + */
> +REG32(GQSPI_DMA_ADDR, 0x800)
> + FIELD(GQSPI_DMA_ADDR, ADDR, 2, 30)
> +REG32(GQSPI_DMA_SIZE, 0x804)
> + FIELD(GQSPI_DMA_SIZE, SIZE, 2, 27)
> +REG32(GQSPI_DMA_STS, 0x808)
> + FIELD(GQSPI_DMA_STS, DONE_CNT, 13, 3)
> + FIELD(GQSPI_DMA_STS, BUSY, 0, 1)
> +REG32(GQSPI_DMA_CTRL, 0x80c)
> + FIELD(GQSPI_DMA_CTRL, FIFO_LVL_HIT_THRESH, 25, 7)
> + FIELD(GQSPI_DMA_CTRL, APB_ERR_RESP, 24, 1)
> + FIELD(GQSPI_DMA_CTRL, ENDIANNESS, 23, 1)
> + FIELD(GQSPI_DMA_CTRL, AXI_BRST_TYPE, 22, 1)
> + FIELD(GQSPI_DMA_CTRL, TIMEOUT_VAL, 10, 12)
> + FIELD(GQSPI_DMA_CTRL, FIFO_THRESH, 2, 8)
> + FIELD(GQSPI_DMA_CTRL, PAUSE_STRM, 1, 1)
> + FIELD(GQSPI_DMA_CTRL, PAUSE_MEM, 0, 1)
> +REG32(GQSPI_DMA_I_STS, 0x814)
> + FIELD(GQSPI_DMA_I_STS, FIFO_OVERFLOW, 7, 1)
> + FIELD(GQSPI_DMA_I_STS, INVALID_APB, 6, 1)
> + FIELD(GQSPI_DMA_I_STS, THRESH_HIT, 5, 1)
> + FIELD(GQSPI_DMA_I_STS, TIMEOUT_MEM, 4, 1)
> + FIELD(GQSPI_DMA_I_STS, TIMEOUT_STRM, 3, 1)
> + FIELD(GQSPI_DMA_I_STS, AXI_BRESP_ERR, 2, 1)
> + FIELD(GQSPI_DMA_I_STS, DONE, 1, 1)
> +REG32(GQSPI_DMA_I_EN, 0x818)
> + FIELD(GQSPI_DMA_I_EN, FIFO_OVERFLOW, 7, 1)
> + FIELD(GQSPI_DMA_I_EN, INVALID_APB, 6, 1)
> + FIELD(GQSPI_DMA_I_EN, THRESH_HIT, 5, 1)
> + FIELD(GQSPI_DMA_I_EN, TIMEOUT_MEM, 4, 1)
> + FIELD(GQSPI_DMA_I_EN, TIMEOUT_STRM, 3, 1)
> + FIELD(GQSPI_DMA_I_EN, AXI_BRESP_ERR, 2, 1)
> + FIELD(GQSPI_DMA_I_EN, DONE, 1, 1)
> +REG32(GQSPI_DMA_I_DIS, 0x81c)
> + FIELD(GQSPI_DMA_I_DIS, FIFO_OVERFLOW, 7, 1)
> + FIELD(GQSPI_DMA_I_DIS, INVALID_APB, 6, 1)
> + FIELD(GQSPI_DMA_I_DIS, THRESH_HIT, 5, 1)
> + FIELD(GQSPI_DMA_I_DIS, TIMEOUT_MEM, 4, 1)
> + FIELD(GQSPI_DMA_I_DIS, TIMEOUT_STRM, 3, 1)
> + FIELD(GQSPI_DMA_I_DIS, AXI_BRESP_ERR, 2, 1)
> + FIELD(GQSPI_DMA_I_DIS, DONE, 1, 1)
> +REG32(GQSPI_DMA_I_MASK, 0x820)
> + FIELD(GQSPI_DMA_I_MASK, FIFO_OVERFLOW, 7, 1)
> + FIELD(GQSPI_DMA_I_MASK, INVALID_APB, 6, 1)
> + FIELD(GQSPI_DMA_I_MASK, THRESH_HIT, 5, 1)
> + FIELD(GQSPI_DMA_I_MASK, TIMEOUT_MEM, 4, 1)
> + FIELD(GQSPI_DMA_I_MASK, TIMEOUT_STRM, 3, 1)
> + FIELD(GQSPI_DMA_I_MASK, AXI_BRESP_ERR, 2, 1)
> + FIELD(GQSPI_DMA_I_MASK, DONE, 1, 1)
> +REG32(GQSPI_DMA_CTRL2, 0x824)
> + FIELD(GQSPI_DMA_CTRL2, ARCACHE, 24, 3)
> + FIELD(GQSPI_DMA_CTRL2, TIMEOUT_EN, 22, 1)
> + FIELD(GQSPI_DMA_CTRL2, TIMEOUT_PRE, 4, 12)
> + FIELD(GQSPI_DMA_CTRL2, MAX_OUTS_CMDS, 0, 4)
> +REG32(GQSPI_DMA_ADDR_MSB, 0x828)
> + FIELD(GQSPI_DMA_ADDR_MSB, ADDR_MSB, 0, 12)
> +
> +#define R_GQSPI_DMA_CTRL_RESET (0x803ffa00)
> +#define R_GQSPI_DMA_INT_MASK (0xfe)
> +#define R_GQSPI_DMA_CTRL2_RESET (0x081bfff8)
>
> /* size of TXRX FIFOs */
> #define RXFF_A (128)
> @@ -341,6 +401,7 @@ static void xilinx_spips_update_ixr(XilinxSPIPS *s)
> static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
> {
> uint32_t gqspi_int;
> + uint32_t mode;
> int new_irqline;
>
> s->regs[R_GQSPI_ISR] &= ~IXR_SELF_CLEAR;
> @@ -359,13 +420,20 @@ static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
> IXR_TX_FIFO_NOT_FULL : 0);
>
> /* GQSPI Interrupt Trigger Status */
> - gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] & GQSPI_IXR_MASK;
> - new_irqline = !!(gqspi_int & IXR_ALL);
> -
> - /* drive external interrupt pin */
> - if (new_irqline != s->gqspi_irqline) {
> - s->gqspi_irqline = new_irqline;
> - qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
> + mode = ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, MODE_EN);
> + if (mode == GQSPI_CNFG_MODE_EN_IO) {
> + gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] \
> + & GQSPI_IXR_MASK;
> + new_irqline = !!(gqspi_int & IXR_ALL);
> +
> + /* drive external interrupt pin */
> + if (new_irqline != s->gqspi_irqline) {
> + s->gqspi_irqline = new_irqline;
> + qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
> + }
> + } else if (mode == GQSPI_CNFG_MODE_EN_DMA) {
> + new_irqline = s->regs[R_GQSPI_DMA_I_STS] & ~s->regs[R_GQSPI_DMA_I_MASK];
> + qemu_set_irq(XILINX_SPIPS(s)->irq, !!new_irqline);
> }
> }
>
> @@ -417,9 +485,9 @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
> s->regs[R_GQSPI_GPIO] = 1;
> s->regs[R_GQSPI_LPBK_DLY_ADJ] = R_GQSPI_LPBK_DLY_ADJ_RESET;
> s->regs[R_GQSPI_MOD_ID] = R_GQSPI_MOD_ID_RESET;
> - s->regs[R_QSPIDMA_DST_CTRL] = R_QSPIDMA_DST_CTRL_RESET;
> - s->regs[R_QSPIDMA_DST_I_MASK] = R_QSPIDMA_DST_I_MASK_RESET;
> - s->regs[R_QSPIDMA_DST_CTRL2] = R_QSPIDMA_DST_CTRL2_RESET;
> + s->regs[R_GQSPI_DMA_CTRL] = R_GQSPI_DMA_CTRL_RESET;
> + s->regs[R_GQSPI_DMA_I_MASK] = R_GQSPI_DMA_INT_MASK;
> + s->regs[R_GQSPI_DMA_CTRL2] = R_GQSPI_DMA_CTRL2_RESET;
> s->man_start_com_g = false;
> s->gqspi_irqline = 0;
> xlnx_zynqmp_qspips_update_ixr(s);
> @@ -843,6 +911,66 @@ static const void *pop_buf(Fifo8 *fifo, uint32_t max, uint32_t *num)
> return ret;
> }
>
> +static void xlnx_zynqmp_gspips_dma_done(XlnxZynqMPQSPIPS *s)
> +{
> + int cnt;
> +
> + s->regs[R_GQSPI_DMA_STS] &= ~R_GQSPI_DMA_STS_BUSY_MASK;
> + s->regs[R_GQSPI_DMA_I_STS] |= R_GQSPI_DMA_I_STS_DONE_MASK;
> +
> + cnt = ARRAY_FIELD_EX32(s->regs, GQSPI_DMA_STS, DONE_CNT) + 1;
> + ARRAY_FIELD_DP32(s->regs, GQSPI_DMA_STS, DONE_CNT, cnt);
> +
> +}
> +
> +static uint32_t xlnx_zynqmp_gspips_dma_advance(XlnxZynqMPQSPIPS *s,
> + uint32_t len, hwaddr dst)
> +{
> + uint32_t size = s->regs[R_GQSPI_DMA_SIZE];
> +
> + size -= len;
> + size &= R_GQSPI_DMA_SIZE_SIZE_MASK;
> + dst += len;
> +
> + s->regs[R_GQSPI_DMA_SIZE] = size;
> + s->regs[R_GQSPI_DMA_ADDR] = (uint32_t) dst;
> + s->regs[R_GQSPI_DMA_ADDR_MSB] = dst >> 32;
> +
> + return size;
> +}
> +
> +static size_t xlnx_zynqmp_gspips_dma_push(XlnxZynqMPQSPIPS *s,
> + uint8_t *buf, size_t len, bool eop)
> +{
> + hwaddr dst = (hwaddr)s->regs[R_GQSPI_DMA_ADDR_MSB] << 32
> + | s->regs[R_GQSPI_DMA_ADDR];
> + uint32_t size = s->regs[R_GQSPI_DMA_SIZE];
> + uint32_t mlen = MIN(size, len) & (~3); /* Size is word aligned */
> +
> + if (size == 0 || len <= 0) {
> + return 0;
> + }
> +
> + if (address_space_write(s->dma_as, dst, MEMTXATTRS_UNSPECIFIED, buf, mlen)
> + != MEMTX_OK) {
> + return 0;
> + }
> +
> + size = xlnx_zynqmp_gspips_dma_advance(s, mlen, dst);
> +
> + if (size == 0) {
> + xlnx_zynqmp_gspips_dma_done(s);
> + xlnx_zynqmp_qspips_update_ixr(s);
> + }
> +
> + return mlen;
> +}
> +
> +static bool xlnx_zynqmp_gspips_dma_can_push(XlnxZynqMPQSPIPS *s)
> +{
> + return s->regs[R_GQSPI_DMA_SIZE] ? true : false;
> +}
> +
> static void xlnx_zynqmp_qspips_notify(void *opaque)
> {
> XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(opaque);
> @@ -850,7 +978,8 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
> Fifo8 *recv_fifo;
>
> if (ARRAY_FIELD_EX32(rq->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
> - if (!(ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN) == 2)) {
> + if (ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN)
> + != GQSPI_CNFG_MODE_EN_DMA) {
> return;
> }
> recv_fifo = &rq->rx_fifo_g;
> @@ -861,7 +990,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
> recv_fifo = &s->rx_fifo;
> }
> while (recv_fifo->num >= 4
> - && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
> + && xlnx_zynqmp_gspips_dma_can_push(rq))
> {
> size_t ret;
> uint32_t num;
> @@ -874,7 +1003,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
>
> memcpy(rq->dma_buf, rxd, num);
>
> - ret = stream_push(rq->dma, rq->dma_buf, num, false);
> + ret = xlnx_zynqmp_gspips_dma_push(rq, rq->dma_buf, num, false);
> assert(ret == num);
> xlnx_zynqmp_qspips_check_flush(rq);
> }
> @@ -1127,6 +1256,31 @@ static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
> case R_GQSPI_GF_SNAPSHOT:
> case R_GQSPI_MOD_ID:
> break;
> + case R_GQSPI_DMA_ADDR:
> + s->regs[R_GQSPI_DMA_ADDR] = value & R_GQSPI_DMA_ADDR_ADDR_MASK;
> + break;
> + case R_GQSPI_DMA_SIZE:
> + s->regs[R_GQSPI_DMA_SIZE] = value & R_GQSPI_DMA_SIZE_SIZE_MASK;
> + break;
> + case R_GQSPI_DMA_STS:
> + s->regs[R_GQSPI_DMA_STS] &= ~(value &
> + R_GQSPI_DMA_STS_DONE_CNT_MASK);
> + break;
> + case R_GQSPI_DMA_I_EN:
> + s->regs[R_GQSPI_DMA_I_EN] = value & R_GQSPI_DMA_INT_MASK;
> + s->regs[R_GQSPI_DMA_I_MASK] &= ~s->regs[R_GQSPI_DMA_I_EN];
> + s->regs[R_GQSPI_DMA_I_DIS] &= ~s->regs[R_GQSPI_DMA_I_EN];
> + break;
> + case R_GQSPI_DMA_I_DIS:
> + s->regs[R_GQSPI_DMA_I_DIS] |= value & R_GQSPI_DMA_INT_MASK;
> + s->regs[R_GQSPI_DMA_I_MASK] |= s->regs[R_GQSPI_DMA_I_DIS];
> + s->regs[R_GQSPI_DMA_I_EN] &= ~s->regs[R_GQSPI_DMA_I_DIS];
> + s->regs[R_GQSPI_DMA_STS] &= 0;
> + break;
> + case R_GQSPI_DMA_ADDR_MSB:
> + s->regs[R_GQSPI_DMA_ADDR_MSB] = value &
> + R_GQSPI_DMA_ADDR_MSB_ADDR_MSB_MASK;
> + break;
> default:
> s->regs[reg] = value;
> break;
> @@ -1353,15 +1507,22 @@ static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
> fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
> fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
> fifo32_create(&s->fifo_g, 32);
> +
> + if (s->dma_mr) {
> + s->dma_as = g_malloc0(sizeof(AddressSpace));
> + address_space_init(s->dma_as, s->dma_mr, NULL);
> + } else {
> + s->dma_as = &address_space_memory;
> + }
> }
>
> static void xlnx_zynqmp_qspips_init(Object *obj)
> {
> - XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(obj);
> + XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(obj);
>
> - object_property_add_link(obj, "stream-connected-dma", TYPE_STREAM_SINK,
> - (Object **)&rq->dma,
> - object_property_allow_set_link,
> + object_property_add_link(obj, "xlnx_zynqmp_qspips_dma", TYPE_MEMORY_REGION,
> + (Object **)&s->dma_mr,
> + qdev_prop_allow_set_link_before_realize,
> OBJ_PROP_LINK_STRONG);
> }
>
> --
> 2.7.4
>
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 12:44 ` Edgar E. Iglesias
@ 2021-02-08 14:10 ` Bin Meng
2021-02-08 14:34 ` Edgar E. Iglesias
0 siblings, 1 reply; 11+ messages in thread
From: Bin Meng @ 2021-02-08 14:10 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, francisco.iglesias, qemu-arm,
Alistair Francis
Hi Edgar,
On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
<edgar.iglesias@gmail.com> wrote:
>
> On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >
> > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> > crash. This is observed when testing VxWorks 7.
> >
> > Add a basic implementation of QSPI DMA functionality.
> >
> > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > Signed-off-by: Bin Meng <bin.meng@windriver.com>
>
> + Francisco
>
> Hi,
>
> As Peter commented on the previous version, the DMA unit is actually separate.
Is it really separate? In the Xilinx ZynqMP datasheet, it's an
integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
different from a DMA engine in an Ethernet controller.
> This module is better modelled by pushing data through the Stream framework
> into the DMA. The DMA model is not upstream but can be found here:
> https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
>
What's the benefit of modeling it using the stream framework?
> Feel free to send a patch to upstream with that model (perhaps changing
> the filename to something more suitable, e.g. xlnx-csu-stream-dma.c).
> You can use --author="Edgar E. Iglesias <edgar.iglesias@xilinx.com>".
>
Please upstream all the work Xilinx has done on QEMU. If you think the
DMA support should really use the Xilinx one, please do the upstream
work, as we are not familiar with that implementation.
Currently we are having a hard time testing the upstream QEMU Xilinx
QSPI model with either U-Boot or Linux. With the limited information
available online, we cannot boot anything on the upstream Xilinx
ZynqMP machine model; instructions are needed. I also suggested to
Francisco in another thread that a QEMU target guide for ZynqMP should
be added to provide such information.
> The DMA should be mapped to 0xFF0F0800 and IRQ 15.
>
> CC'd Francisco; he's going to publish some smoke-tests for this.
>
Regards,
Bin
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 14:10 ` Bin Meng
@ 2021-02-08 14:34 ` Edgar E. Iglesias
2021-02-08 14:45 ` Bin Meng
0 siblings, 1 reply; 11+ messages in thread
From: Edgar E. Iglesias @ 2021-02-08 14:34 UTC (permalink / raw)
To: Bin Meng
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, francisco.iglesias, qemu-arm,
Alistair Francis
On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
> Hi Edgar,
>
> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
> <edgar.iglesias@gmail.com> wrote:
> >
> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > >
> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> > > crash. This is observed when testing VxWorks 7.
> > >
> > > Add a basic implementation of QSPI DMA functionality.
> > >
> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
> >
> > + Francisco
> >
> > Hi,
> >
> > As Peter commented on the previous version, the DMA unit is actually
> separate.
>
> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
> different from a DMA engine in an Ethernet controller.
>
Yes, it's a separate module.
> > This module is better modelled by pushing data through the Stream
> framework
> > into the DMA. The DMA model is not upstream but can be found here:
> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
> >
>
> What's the benefit of modeling it using the stream framework?
>
Because it matches the real hardware, and this particular DMA exists in
various instances, not only in the QSPI. We don't want duplicate
implementations of the same DMA.
Cheers,
Edgar
> > Feel free to send a patch to upstream with that model (perhaps changing
> > the filename to something more suitable, e.g. xlnx-csu-stream-dma.c).
> > You can use --author="Edgar E. Iglesias <edgar.iglesias@xilinx.com>".
> >
>
> Please upstream all the work Xilinx has done on QEMU. If you think the
> DMA support should really use the Xilinx one, please do the upstream
> work, as we are not familiar with that implementation.
>
> Currently we are having a hard time testing the upstream QEMU Xilinx
> QSPI model with either U-Boot or Linux. With the limited information
> available online, we cannot boot anything on the upstream Xilinx
> ZynqMP machine model; instructions are needed. I also suggested to
> Francisco in another thread that a QEMU target guide for ZynqMP should
> be added to provide such information.
>
> > The DMA should be mapped to 0xFF0F0800 and IRQ 15.
> >
> > CC'd Francisco; he's going to publish some smoke-tests for this.
> >
>
> Regards,
> Bin
>
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 14:34 ` Edgar E. Iglesias
@ 2021-02-08 14:45 ` Bin Meng
2021-02-08 15:17 ` Edgar E. Iglesias
0 siblings, 1 reply; 11+ messages in thread
From: Bin Meng @ 2021-02-08 14:45 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, francisco.iglesias, qemu-arm,
Alistair Francis
Hi Edgar,
On Mon, Feb 8, 2021 at 10:34 PM Edgar E. Iglesias
<edgar.iglesias@gmail.com> wrote:
>
>
>
> On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
>>
>> Hi Edgar,
>>
>> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
>> <edgar.iglesias@gmail.com> wrote:
>> >
>> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
>> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
>> > >
>> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
>> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
>> > > crash. This is observed when testing VxWorks 7.
>> > >
>> > > Add a basic implementation of QSPI DMA functionality.
>> > >
>> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
>> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
>> >
>> > + Francisco
>> >
>> > Hi,
>> >
>> > As Peter commented on the previous version, the DMA unit is actually separate.
>>
>> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
>> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
>> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
>> different from a DMA engine in an Ethernet controller.
>
>
> Yes, it's a separate module.
>
>>
>> > This module is better modelled by pushing data through the Stream framework
>> > into the DMA. The DMA model is not upstream but can be found here:
>> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
>> >
>>
>> What's the benefit of modeling it using the stream framework?
>
>
>
> Because it matches the real hardware, and this particular DMA exists in various instances, not only in the QSPI. We don't want duplicate implementations of the same DMA.
>
Would you please share more details, like what other peripherals are
using this same DMA model?
Regards,
Bin
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 14:45 ` Bin Meng
@ 2021-02-08 15:17 ` Edgar E. Iglesias
2021-02-09 2:30 ` Bin Meng
0 siblings, 1 reply; 11+ messages in thread
From: Edgar E. Iglesias @ 2021-02-08 15:17 UTC (permalink / raw)
To: Bin Meng
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, Francisco Iglesias, qemu-arm,
Alistair Francis
On Mon, Feb 8, 2021 at 3:45 PM Bin Meng <bmeng.cn@gmail.com> wrote:
> Hi Edgar,
>
> On Mon, Feb 8, 2021 at 10:34 PM Edgar E. Iglesias
> <edgar.iglesias@gmail.com> wrote:
> >
> >
> >
> > On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
> >>
> >> Hi Edgar,
> >>
> >> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
> >> <edgar.iglesias@gmail.com> wrote:
> >> >
> >> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> >> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >> > >
> >> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> >> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> >> > > crash. This is observed when testing VxWorks 7.
> >> > >
> >> > > Add a basic implementation of QSPI DMA functionality.
> >> > >
> >> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
> >> >
> >> > + Francisco
> >> >
> >> > Hi,
> >> >
> >> > As Peter commented on the previous version, the DMA unit is actually
> separate.
> >>
> >> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
> >> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
> >> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
> >> different from a DMA engine in an Ethernet controller.
> >
> >
> > Yes, it's a separate module.
> >
> >>
> >> > This module is better modelled by pushing data through the Stream
> framework
> >> > into the DMA. The DMA model is not upstream but can be found here:
> >> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
> >> >
> >>
> >> What's the benefit of modeling it using the stream framework?
> >
> >
> >
> > Because it matches the real hardware, and this particular DMA exists in
> various instances, not only in the QSPI. We don't want duplicate
> implementations of the same DMA.
> >
>
> Would you please share more details, like what other peripherals are
> using this same DMA model?
>
>
It's used by the Crypto blocks (SHA, AES) and by the bitstream programming
blocks on the ZynqMP.
In Versal there's the same plus some additional uses of this DMA...
Best regards,
Edgar
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-08 15:17 ` Edgar E. Iglesias
@ 2021-02-09 2:30 ` Bin Meng
2021-02-10 9:08 ` Bin Meng
0 siblings, 1 reply; 11+ messages in thread
From: Bin Meng @ 2021-02-09 2:30 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, Francisco Iglesias, qemu-arm,
Alistair Francis
Hi Edgar,
On Mon, Feb 8, 2021 at 11:17 PM Edgar E. Iglesias
<edgar.iglesias@gmail.com> wrote:
>
>
>
> On Mon, Feb 8, 2021 at 3:45 PM Bin Meng <bmeng.cn@gmail.com> wrote:
>>
>> Hi Edgar,
>>
>> On Mon, Feb 8, 2021 at 10:34 PM Edgar E. Iglesias
>> <edgar.iglesias@gmail.com> wrote:
>> >
>> >
>> >
>> > On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
>> >>
>> >> Hi Edgar,
>> >>
>> >> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
>> >> <edgar.iglesias@gmail.com> wrote:
>> >> >
>> >> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
>> >> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
>> >> > >
>> >> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
>> >> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
>> >> > > crash. This is observed when testing VxWorks 7.
>> >> > >
>> >> > > Add a basic implementation of QSPI DMA functionality.
>> >> > >
>> >> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
>> >> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
>> >> >
>> >> > + Francisco
>> >> >
>> >> > Hi,
>> >> >
>> >> > As Peter commented on the previous version, the DMA unit is actually separate.
>> >>
>> >> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
>> >> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
>> >> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
>> >> different from a DMA engine in an Ethernet controller.
>> >
>> >
>> > Yes, it's a separate module.
>> >
>> >>
>> >> > This module is better modelled by pushing data through the Stream framework
>> >> > into the DMA. The DMA model is not upstream but can be found here:
>> >> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
>> >> >
>> >>
>> >> What's the benefit of modeling it using the stream framework?
>> >
>> >
>> >
>> > Because it matches the real hardware, and this particular DMA exists in various instances, not only in the QSPI. We don't want duplicate implementations of the same DMA.
>> >
>>
>> Would you please share more details, like what other peripherals are
>> using this same DMA model?
>>
>
> It's used by the Crypto blocks (SHA, AES) and by the bitstream programming blocks on the ZynqMP.
> In Versal there's the same plus some additional uses of this DMA...
Sigh, it's not obvious from the ZynqMP datasheet. Indeed the crypto
blocks seem to use the same IP that the QSPI uses for its DMA mode.
With that additional information, I agree that modeling the DMA as a
separate device makes sense.
Will investigate the Xilinx fork and report back.
Regards,
Bin
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-09 2:30 ` Bin Meng
@ 2021-02-10 9:08 ` Bin Meng
2021-02-10 10:04 ` Edgar E. Iglesias
0 siblings, 1 reply; 11+ messages in thread
From: Bin Meng @ 2021-02-10 9:08 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, Francisco Iglesias, qemu-arm,
Alistair Francis
On Tue, Feb 9, 2021 at 10:30 AM Bin Meng <bmeng.cn@gmail.com> wrote:
>
> Hi Edgar,
>
> On Mon, Feb 8, 2021 at 11:17 PM Edgar E. Iglesias
> <edgar.iglesias@gmail.com> wrote:
> >
> >
> >
> > On Mon, Feb 8, 2021 at 3:45 PM Bin Meng <bmeng.cn@gmail.com> wrote:
> >>
> >> Hi Edgar,
> >>
> >> On Mon, Feb 8, 2021 at 10:34 PM Edgar E. Iglesias
> >> <edgar.iglesias@gmail.com> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
> >> >>
> >> >> Hi Edgar,
> >> >>
> >> >> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
> >> >> <edgar.iglesias@gmail.com> wrote:
> >> >> >
> >> >> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> >> >> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >> >> > >
> >> >> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> >> >> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> >> >> > > crash. This is observed when testing VxWorks 7.
> >> >> > >
> >> >> > > Add a basic implementation of QSPI DMA functionality.
> >> >> > >
> >> >> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >> >> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
> >> >> >
> >> >> > + Francisco
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > As Peter commented on the previous version, the DMA unit is actually separate.
> >> >>
> >> >> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
> >> >> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
> >> >> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
> >> >> different from a DMA engine in an Ethernet controller.
> >> >
> >> >
> >> > Yes, it's a separate module.
> >> >
> >> >>
> >> >> > This module is better modelled by pushing data through the Stream framework
> >> >> > into the DMA. The DMA model is not upstream but can be found here:
> >> >> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
> >> >> >
> >> >>
> >> >> What's the benefit of modeling it using the stream framework?
> >> >
> >> >
> >> >
> >> > Because it matches the real hardware, and this particular DMA exists in various instances, not only in the QSPI. We don't want duplicate implementations of the same DMA.
> >> >
> >>
> >> Would you please share more details, like what other peripherals are
> >> using this same DMA model?
> >>
> >
> > It's used by the Crypto blocks (SHA, AES) and by the bitstream programming blocks on the ZynqMP.
> > In Versal there's the same plus some additional uses of this DMA...
>
> > Sigh, it's not obvious from the ZynqMP datasheet. Indeed the crypto
> > blocks seem to use the same IP that the QSPI uses for its DMA mode.
> > With that additional information, I agree that modeling the DMA as a
> > separate device makes sense.
> >
> > Will investigate the Xilinx fork and report back.
Unfortunately the Xilinx fork of QEMU does not boot VxWorks. It looks
like the fork has diverged quite a lot from upstream QEMU. For
example, the fork has a machine type for ZynqMP that does not exist
upstream. It seems quite a lot has not been upstreamed yet, sigh.
The CSU DMA model in the Xilinx fork seems to be quite complicated and
has a lot of functionality. However, right now our goal is to
implement a minimal model that works with the GQSPI model to make the
QSPI DMA functionality usable.
We implemented a basic CSU DMA model based on the Xilinx fork, and
will send it as v3 soon.
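For illustration, a hedged skeleton of what a stream-based DMA sink
looks like when wired into QEMU's Stream framework; the type name,
fields, and register handling below are invented for the example and do
not reflect the actual v3 patch or the Xilinx fork (realize/reset and
the register interface are omitted):

#include "qemu/osdep.h"
#include "hw/stream.h"
#include "hw/sysbus.h"

#define TYPE_XLNX_CSU_DMA_SKETCH "xlnx.csu-dma-sketch"
OBJECT_DECLARE_SIMPLE_TYPE(XlnxCSUDMASketch, XLNX_CSU_DMA_SKETCH)

struct XlnxCSUDMASketch {
    SysBusDevice parent_obj;
    AddressSpace *dma_as;   /* destination address space */
    uint64_t dst_addr;      /* mirrors the DST_ADDR/ADDR_MSB registers */
    uint32_t dst_size;      /* mirrors the DST_SIZE register */
};

/* Producers such as the GQSPI hand data over via stream_push(). */
static size_t csu_dma_sketch_push(StreamSink *obj, uint8_t *buf,
                                  size_t len, bool eop)
{
    XlnxCSUDMASketch *s = XLNX_CSU_DMA_SKETCH(obj);
    uint32_t mlen = MIN(s->dst_size, len) & ~3; /* word aligned */

    if (mlen == 0 ||
        address_space_write(s->dma_as, s->dst_addr, MEMTXATTRS_UNSPECIFIED,
                            buf, mlen) != MEMTX_OK) {
        return 0;
    }
    s->dst_addr += mlen;
    s->dst_size -= mlen;
    /* a real model would raise its DONE interrupt once dst_size hits 0 */
    return mlen;
}

static bool csu_dma_sketch_can_push(StreamSink *obj,
                                    StreamCanPushNotifyFn notify,
                                    void *notify_opaque)
{
    return XLNX_CSU_DMA_SKETCH(obj)->dst_size != 0;
}

static void csu_dma_sketch_class_init(ObjectClass *klass, void *data)
{
    StreamSinkClass *ssc = STREAM_SINK_CLASS(klass);

    ssc->push = csu_dma_sketch_push;
    ssc->can_push = csu_dma_sketch_can_push;
}

static const TypeInfo csu_dma_sketch_info = {
    .name          = TYPE_XLNX_CSU_DMA_SKETCH,
    .parent        = TYPE_SYS_BUS_DEVICE,
    .instance_size = sizeof(XlnxCSUDMASketch),
    .class_init    = csu_dma_sketch_class_init,
    .interfaces    = (InterfaceInfo[]) {
        { TYPE_STREAM_SINK },
        { }
    },
};

static void csu_dma_sketch_register_types(void)
{
    type_register_static(&csu_dma_sketch_info);
}

type_init(csu_dma_sketch_register_types)

With a sink like this, the QSPI side can keep using stream_can_push()
and stream_push() exactly as the pre-v2 code did, and the same sink
type can be instantiated for the other DMA users mentioned above.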
Regards,
Bin
* Re: [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
2021-02-10 9:08 ` Bin Meng
@ 2021-02-10 10:04 ` Edgar E. Iglesias
0 siblings, 0 replies; 11+ messages in thread
From: Edgar E. Iglesias @ 2021-02-10 10:04 UTC (permalink / raw)
To: Bin Meng
Cc: Peter Maydell, Xuzhou Cheng, Bin Meng,
qemu-devel@nongnu.org Developers, Francisco Iglesias, qemu-arm,
Alistair Francis
On Wed, Feb 10, 2021 at 05:08:01PM +0800, Bin Meng wrote:
> On Tue, Feb 9, 2021 at 10:30 AM Bin Meng <bmeng.cn@gmail.com> wrote:
> >
> > Hi Edgar,
> >
> > On Mon, Feb 8, 2021 at 11:17 PM Edgar E. Iglesias
> > <edgar.iglesias@gmail.com> wrote:
> > >
> > >
> > >
> > > On Mon, Feb 8, 2021 at 3:45 PM Bin Meng <bmeng.cn@gmail.com> wrote:
> > >>
> > >> Hi Edgar,
> > >>
> > >> On Mon, Feb 8, 2021 at 10:34 PM Edgar E. Iglesias
> > >> <edgar.iglesias@gmail.com> wrote:
> > >> >
> > >> >
> > >> >
> > >> > On Mon, 8 Feb 2021, 15:10 Bin Meng, <bmeng.cn@gmail.com> wrote:
> > >> >>
> > >> >> Hi Edgar,
> > >> >>
> > >> >> On Mon, Feb 8, 2021 at 8:44 PM Edgar E. Iglesias
> > >> >> <edgar.iglesias@gmail.com> wrote:
> > >> >> >
> > >> >> > On Mon, Feb 08, 2021 at 01:25:24PM +0800, Bin Meng wrote:
> > >> >> > > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > >> >> > >
> > >> >> > > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> > >> >> > > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> > >> >> > > crash. This is observed when testing VxWorks 7.
> > >> >> > >
> > >> >> > > Add a basic implementation of QSPI DMA functionality.
> > >> >> > >
> > >> >> > > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > >> >> > > Signed-off-by: Bin Meng <bin.meng@windriver.com>
> > >> >> >
> > >> >> > + Francisco
> > >> >> >
> > >> >> > Hi,
> > >> >> >
> > >> >> > As Peter commented on the previous version, the DMA unit is actually separate.
> > >> >>
> > >> >> Is it really separate? In the Xilinx ZynqMP datasheet, it's an
> > >> >> integrated DMA unit dedicated to QSPI usage. IIUC, other modules on
> > >> >> the ZynqMP SoC cannot use it to do any DMA transfer. To me this is no
> > >> >> different from a DMA engine in an Ethernet controller.
> > >> >
> > >> >
> > >> > Yes, it's a separate module.
> > >> >
> > >> >>
> > >> >> > This module is better modelled by pushing data through the Stream framework
> > >> >> > into the DMA. The DMA model is not upstream but can be found here:
> > >> >> > https://github.com/Xilinx/qemu/blob/master/hw/dma/csu_stream_dma.c
> > >> >> >
> > >> >>
> > >> >> What's the benefit of modeling it using the stream framework?
> > >> >
> > >> >
> > >> >
> > >> > Because it matches the real hardware, and this particular DMA exists in various instances, not only in the QSPI. We don't want duplicate implementations of the same DMA.
> > >> >
> > >>
> > >> Would you please share more details, like what other peripherals are
> > >> using this same DMA model?
> > >>
> > >
> > > It's used by the Crypto blocks (SHA, AES) and by the bitstream programming blocks on the ZynqMP.
> > > In Versal there's the same plus some additional uses of this DMA...
> >
> > Sigh, it's not obvious from the ZynqMP datasheet. Indeed the crypto
> > blocks seem to be using the same IP that QSPI uses for its DMA mode.
> > With that additional information, I agree modeling the DMA as a
> > separate model makes sense.
> >
> > Will investigate the Xilinx fork, and report back.
>
> Unfortunately the Xilinx fork of QEMU does not boot VxWorks. It looks
> like the fork has diverged quite a lot from upstream QEMU. For
> example, the fork has a machine type for ZynqMP that does not exist
> upstream. It seems quite a lot has not been upstreamed yet, sigh.
>
> The CSU DMA model in the Xilinx fork seems to be quite complicated and
> has a lot of functionality. However, right now our goal is to
> implement a minimal model that works with the GQSPI model to make the
> QSPI DMA functionality usable.
> We implemented a basic CSU DMA model based on the Xilinx fork, and
> will send it as v3 soon.
>
We've prepared a patch with the QSPI DMA support using the complete
DMA model. We'll send that out soon. It's better if you base your
work on that.
Cheers,
Edgar
Thread overview: 11+ messages
2021-02-08 5:25 [PATCH v2 0/2] ZynqMP QSPI supports SPI transfer using DMA mode, but currently this Bin Meng
2021-02-08 5:25 ` [PATCH v2 1/2] hw/ssi: xilinx_spips: Clean up coding convention issues Bin Meng
2021-02-08 5:25 ` [PATCH v2 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support Bin Meng
2021-02-08 12:44 ` Edgar E. Iglesias
2021-02-08 14:10 ` Bin Meng
2021-02-08 14:34 ` Edgar E. Iglesias
2021-02-08 14:45 ` Bin Meng
2021-02-08 15:17 ` Edgar E. Iglesias
2021-02-09 2:30 ` Bin Meng
2021-02-10 9:08 ` Bin Meng
2021-02-10 10:04 ` Edgar E. Iglesias