* [RFC PATCH 0/2] spi: spi-mem: Add a direct mapping API
@ 2018-06-01 14:36 Boris Brezillon
2018-06-01 14:36 ` [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping Boris Brezillon
2018-06-01 14:36 ` [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances Boris Brezillon
0 siblings, 2 replies; 8+ messages in thread
From: Boris Brezillon @ 2018-06-01 14:36 UTC (permalink / raw)
To: David Woodhouse, Brian Norris, Boris Brezillon, Marek Vasut,
Richard Weinberger, linux-mtd, Mark Brown, linux-spi
Cc: Cyrille Pitchen, Vignesh R, Thomas Petazzoni
Hello,
I know this topic has come up several times during the development of
the spi-mem layer, so I'm finally proposing something to expose this
feature to SPI mem drivers.
Right now it's a rather simple implementation where only one spi_mem_op
operation can be attached to a direct mapping, meaning that write
access to SPI NOR devices will still require manually sending the WR_EN
operation before using the direct mapping. It should still improve
performance overall, and it matches how most SPI controller drivers describe
direct mappings (a SPI memory operation attached to the range of memory
that you want to directly map into the CPU address space).
Feel free to comment on this implementation.
Thanks,
Boris
Boris Brezillon (2):
spi: spi-mem: Add a new API to support direct mapping
mtd: m25p80: Use the SPI mem direct API to possibly improve
performances
drivers/mtd/devices/m25p80.c | 149 ++++++++++++++----------
drivers/spi/spi-mem.c | 267 +++++++++++++++++++++++++++++++++++++++----
include/linux/spi/spi-mem.h | 72 ++++++++++++
3 files changed, 407 insertions(+), 81 deletions(-)
--
2.14.1
______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/
^ permalink raw reply [flat|nested] 8+ messages in thread
* [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping
2018-06-01 14:36 [RFC PATCH 0/2] spi: spi-mem: Add a direct mapping API Boris Brezillon
@ 2018-06-01 14:36 ` Boris Brezillon
2018-06-07 15:01 ` Miquel Raynal
2018-06-01 14:36 ` [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances Boris Brezillon
1 sibling, 1 reply; 8+ messages in thread
From: Boris Brezillon @ 2018-06-01 14:36 UTC (permalink / raw)
To: David Woodhouse, Brian Norris, Boris Brezillon, Marek Vasut,
Richard Weinberger, linux-mtd, Mark Brown, linux-spi
Cc: Cyrille Pitchen, Vignesh R, Thomas Petazzoni
Most modern QSPI controllers support can directly map a SPI memory (or
a portion of the SPI memory) in the CPU address space. Most of the time
this brings significant performance improvements as it automates the
whole process of sending SPI memory operations every time a new region
is accessed.
This new API allow SPI memory driver to create direct mappings and then
use them to access the memory instead of using spi_mem_exec_op().
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
---
drivers/spi/spi-mem.c | 267 ++++++++++++++++++++++++++++++++++++++++----
include/linux/spi/spi-mem.h | 72 ++++++++++++
2 files changed, 318 insertions(+), 21 deletions(-)
diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
index 990770dfa5cf..90ea0c5263a7 100644
--- a/drivers/spi/spi-mem.c
+++ b/drivers/spi/spi-mem.c
@@ -175,6 +175,44 @@ bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)
}
EXPORT_SYMBOL_GPL(spi_mem_supports_op);
+static int spi_mem_access_start(struct spi_mem *mem)
+{
+ struct spi_controller *ctlr = mem->spi->controller;
+
+ /*
+ * Flush the message queue before executing our SPI memory
+ * operation to prevent preemption of regular SPI transfers.
+ */
+ spi_flush_queue(ctlr);
+
+ if (ctlr->auto_runtime_pm) {
+ int ret;
+
+ ret = pm_runtime_get_sync(ctlr->dev.parent);
+ if (ret < 0) {
+ dev_err(&ctlr->dev, "Failed to power device: %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ mutex_lock(&ctlr->bus_lock_mutex);
+ mutex_lock(&ctlr->io_mutex);
+
+ return 0;
+}
+
+static void spi_mem_access_end(struct spi_mem *mem)
+{
+ struct spi_controller *ctlr = mem->spi->controller;
+
+ mutex_unlock(&ctlr->io_mutex);
+ mutex_unlock(&ctlr->bus_lock_mutex);
+
+ if (ctlr->auto_runtime_pm)
+ pm_runtime_put(ctlr->dev.parent);
+}
+
/**
* spi_mem_exec_op() - Execute a memory operation
* @mem: the SPI memory
@@ -200,30 +238,13 @@ int spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
return -ENOTSUPP;
if (ctlr->mem_ops) {
- /*
- * Flush the message queue before executing our SPI memory
- * operation to prevent preemption of regular SPI transfers.
- */
- spi_flush_queue(ctlr);
-
- if (ctlr->auto_runtime_pm) {
- ret = pm_runtime_get_sync(ctlr->dev.parent);
- if (ret < 0) {
- dev_err(&ctlr->dev,
- "Failed to power device: %d\n",
- ret);
- return ret;
- }
- }
+ ret = spi_mem_access_start(mem);
+ if (ret)
+ return ret;
- mutex_lock(&ctlr->bus_lock_mutex);
- mutex_lock(&ctlr->io_mutex);
ret = ctlr->mem_ops->exec_op(mem, op);
- mutex_unlock(&ctlr->io_mutex);
- mutex_unlock(&ctlr->bus_lock_mutex);
- if (ctlr->auto_runtime_pm)
- pm_runtime_put(ctlr->dev.parent);
+ spi_mem_access_end(mem);
/*
* Some controllers only optimize specific paths (typically the
@@ -336,6 +357,210 @@ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
}
EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
+static ssize_t spi_mem_no_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, void *buf)
+{
+ struct spi_mem_op op = desc->info.op_tmpl;
+ size_t read = 0;
+ int ret;
+
+ while (read < len) {
+ op.addr.val = desc->info.offset + offs + read;
+ op.data.buf.in = buf + read;
+ op.data.nbytes = len - read;
+ ret = spi_mem_adjust_op_size(desc->mem, &op);
+ if (ret)
+ return ret;
+
+ ret = spi_mem_exec_op(desc->mem, &op);
+ if (ret)
+ return ret;
+
+ read += op.data.nbytes;
+ }
+
+ return read;
+}
+
+static ssize_t spi_mem_no_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, const void *buf)
+{
+ struct spi_mem_op op = desc->info.op_tmpl;
+ size_t written = 0;
+ int ret;
+
+ while (written < len) {
+ op.addr.val = desc->info.offset + offs + written;
+ op.data.buf.out = buf + written;
+ op.data.nbytes = len - written;
+ ret = spi_mem_adjust_op_size(desc->mem, &op);
+ if (ret)
+ return ret;
+
+ ret = spi_mem_exec_op(desc->mem, &op);
+ if (ret)
+ return ret;
+
+ written += op.data.nbytes;
+ }
+
+ return written;
+}
+
+/**
+ * spi_mem_dirmap_create() - Create a direct mapping descriptor
+ * @mem: SPI mem device this direct mapping should be created for
+ * @info: direct mapping information
+ *
+ * This function creates a direct mapping descriptor which can then be used
+ * to access the memory using spi_mem_dirmap_read() or spi_mem_dirmap_write().
+ * If the SPI controller driver does not support direct mapping, this function
+ * falls back to an implementation using spi_mem_exec_op(), so that the caller
+ * doesn't have to bother implementing a fallback on their own.
+ *
+ * Return: a valid pointer in case of success, and ERR_PTR() otherwise.
+ */
+struct spi_mem_dirmap_desc *
+spi_mem_dirmap_create(struct spi_mem *mem,
+ const struct spi_mem_dirmap_info *info)
+{
+ struct spi_controller *ctlr = mem->spi->controller;
+ struct spi_mem_dirmap_desc *desc;
+ int ret = -ENOTSUPP;
+
+ desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+ if (!desc)
+ return ERR_PTR(-ENOMEM);
+
+ desc->mem = mem;
+ desc->info = *info;
+ if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create)
+ ret = ctlr->mem_ops->dirmap_create(desc);
+
+ if (ret) {
+ desc->nodirmap = true;
+ if (!spi_mem_supports_op(desc->mem, &desc->info.op_tmpl))
+ ret = -ENOTSUPP;
+ else
+ ret = 0;
+ }
+
+ if (ret) {
+ kfree(desc);
+ return ERR_PTR(ret);
+ }
+
+ return desc;
+}
+EXPORT_SYMBOL_GPL(spi_mem_dirmap_create);
+
+/**
+ * spi_mem_dirmap_destroy() - Destroy a direct mapping descriptor
+ * @desc: the direct mapping descriptor to destroy
+ *
+ * This function destroys a direct mapping descriptor previously created by
+ * spi_mem_dirmap_create().
+ */
+void spi_mem_dirmap_destroy(struct spi_mem_dirmap_desc *desc)
+{
+ struct spi_controller *ctlr = desc->mem->spi->controller;
+
+ if (!desc->nodirmap && ctlr->mem_ops && ctlr->mem_ops->dirmap_destroy)
+ ctlr->mem_ops->dirmap_destroy(desc);
+}
+EXPORT_SYMBOL_GPL(spi_mem_dirmap_destroy);
+
+/**
+ * spi_mem_dirmap_read() - Read data through a direct mapping
+ * @desc: direct mapping descriptor
+ * @offs: offset to start reading from. Note that this is not an absolute
+ * offset, but the offset within the direct mapping which already has
+ * its own offset
+ * @len: length in bytes
+ * @buf: destination buffer. This buffer must be DMA-able
+ *
+ * This function reads data from a memory device using a direct mapping
+ * previously instantiated with spi_mem_dirmap_create().
+ *
+ * Return: the amount of data read from the memory device or a negative error
+ * code.
+ */
+ssize_t spi_mem_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, void *buf)
+{
+ struct spi_controller *ctlr = desc->mem->spi->controller;
+ ssize_t ret;
+
+ if (desc->info.op_tmpl.data.dir != SPI_MEM_DATA_IN)
+ return -EINVAL;
+
+ if (!len)
+ return 0;
+
+ if (desc->nodirmap) {
+ ret = spi_mem_no_dirmap_read(desc, offs, len, buf);
+ } else if (ctlr->mem_ops && ctlr->mem_ops->dirmap_read) {
+ ret = spi_mem_access_start(desc->mem);
+ if (ret)
+ return ret;
+
+ ret = ctlr->mem_ops->dirmap_read(desc, offs, len, buf);
+
+ spi_mem_access_end(desc->mem);
+ } else {
+ ret = -ENOTSUPP;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(spi_mem_dirmap_read);
+
+/**
+ * spi_mem_dirmap_write() - Write data through a direct mapping
+ * @desc: direct mapping descriptor
+ * @offs: offset to start writing from. Note that this is not an absolute
+ * offset, but the offset within the direct mapping which already has
+ * its own offset
+ * @len: length in bytes
+ * @buf: source buffer. This buffer must be DMA-able
+ *
+ * This function writes data to a memory device using a direct mapping
+ * previously instantiated with spi_mem_dirmap_create().
+ *
+ * Return: the amount of data written to the memory device or a negative error
+ * code.
+ */
+ssize_t spi_mem_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, const void *buf)
+{
+ struct spi_controller *ctlr = desc->mem->spi->controller;
+ ssize_t ret;
+
+ if (desc->info.op_tmpl.data.dir != SPI_MEM_DATA_OUT)
+ return -EINVAL;
+
+ if (!len)
+ return 0;
+
+ if (desc->nodirmap) {
+ ret = spi_mem_no_dirmap_write(desc, offs, len, buf);
+ } else if (ctlr->mem_ops && ctlr->mem_ops->dirmap_write) {
+ ret = spi_mem_access_start(desc->mem);
+ if (ret)
+ return ret;
+
+ ret = ctlr->mem_ops->dirmap_write(desc, offs, len, buf);
+
+ spi_mem_access_end(desc->mem);
+ } else {
+ ret = -ENOTSUPP;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(spi_mem_dirmap_write);
+
static inline struct spi_mem_driver *to_spi_mem_drv(struct device_driver *drv)
{
return container_of(drv, struct spi_mem_driver, spidrv.driver);
diff --git a/include/linux/spi/spi-mem.h b/include/linux/spi/spi-mem.h
index 4fa34a227a0f..da67b75a19f1 100644
--- a/include/linux/spi/spi-mem.h
+++ b/include/linux/spi/spi-mem.h
@@ -121,6 +121,49 @@ struct spi_mem_op {
.data = __data, \
}
+/**
+ * struct spi_mem_dirmap_info - Direct mapping information
+ * @op_tmpl: operation template that should be used by the direct mapping when
+ * the memory device is accessed
+ * @offset: absolute offset this direct mapping is pointing to
+ * @length: length in bytes of this direct mapping
+ *
+ * This information is used by the controller-specific implementation to know
+ * the portion of memory that is directly mapped and the spi_mem_op that should
+ * be used to access the device.
+ * A direct mapping is only valid for one direction (read or write) and this
+ * direction is directly encoded in the ->op_tmpl.data.dir field.
+ */
+struct spi_mem_dirmap_info {
+ struct spi_mem_op op_tmpl;
+ u64 offset;
+ u64 length;
+};
+
+/**
+ * struct spi_mem_dirmap_desc - Direct mapping descriptor
+ * @mem: the SPI memory device this direct mapping is attached to
+ * @info: information passed at direct mapping creation time
+ * @nodirmap: set to true if the SPI controller does not implement
+ * ->mem_ops->dirmap_create() or when this function returned an
+ * error. If dirmap is true, all spi_mem_dirmap_{read,write}()
+ * calls will use spi_mem_exec_op() to access the memory. This is a
+ * degraded mode that allows higher spi_mem drivers to use the same
+ * code no matter if the controller supports direct mapping or not
+ * @priv: field pointing to controller specific data
+ *
+ * Common part of a direct mapping descriptor. This object is created by
+ * spi_mem_dirmap_create() and controller implementation of ->create_dirmap()
+ * can create/attach direct mapping resources to the descriptor in the ->priv
+ * field.
+ */
+struct spi_mem_dirmap_desc {
+ struct spi_mem *mem;
+ struct spi_mem_dirmap_info info;
+ bool nodirmap;
+ void *priv;
+};
+
/**
* struct spi_mem - describes a SPI memory device
* @spi: the underlying SPI device
@@ -167,10 +210,24 @@ static inline void *spi_mem_get_drvdata(struct spi_mem *mem)
* limitations)
* @supports_op: check if an operation is supported by the controller
* @exec_op: execute a SPI memory operation
+ * @dirmap_create: create a direct mapping descriptor that can later be used to
+ * access the memory device. This method is optional
+ * @dirmap_destroy: destroy a memory descriptor previous created by
+ * ->dirmap_create()
+ * @dirmap_read: read data from the memory device using the direct mapping
+ * created by ->dirmap_create().
+ * @dirmap_write: write data to the memory device using the direct mapping
+ * created by ->dirmap_create().
*
* This interface should be implemented by SPI controllers providing an
* high-level interface to execute SPI memory operation, which is usually the
* case for QSPI controllers.
+ *
+ * Note on ->dirmap_{read,write}(): drivers should avoid accessing the direct
+ * mapping from the CPU because doing that can stall the CPU waiting for the
+ * SPI mem transaction to finish, and this will make real-time maintainers
+ * unhappy and might make your system less reactive. Instead, drivers should
+ * use DMA to access this direct mapping.
*/
struct spi_controller_mem_ops {
int (*adjust_op_size)(struct spi_mem *mem, struct spi_mem_op *op);
@@ -178,6 +235,12 @@ struct spi_controller_mem_ops {
const struct spi_mem_op *op);
int (*exec_op)(struct spi_mem *mem,
const struct spi_mem_op *op);
+ int (*dirmap_create)(struct spi_mem_dirmap_desc *desc);
+ void (*dirmap_destroy)(struct spi_mem_dirmap_desc *desc);
+ ssize_t (*dirmap_read)(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, void *buf);
+ ssize_t (*dirmap_write)(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, const void *buf);
};
/**
@@ -236,6 +299,15 @@ bool spi_mem_supports_op(struct spi_mem *mem,
int spi_mem_exec_op(struct spi_mem *mem,
const struct spi_mem_op *op);
+struct spi_mem_dirmap_desc *
+spi_mem_dirmap_create(struct spi_mem *mem,
+ const struct spi_mem_dirmap_info *info);
+void spi_mem_dirmap_destroy(struct spi_mem_dirmap_desc *desc);
+ssize_t spi_mem_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, void *buf);
+ssize_t spi_mem_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ u64 offs, size_t len, const void *buf);
+
int spi_mem_driver_register_with_owner(struct spi_mem_driver *drv,
struct module *owner);
--
2.14.1
* [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances
2018-06-01 14:36 [RFC PATCH 0/2] spi: spi-mem: Add a direct mapping API Boris Brezillon
2018-06-01 14:36 ` [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping Boris Brezillon
@ 2018-06-01 14:36 ` Boris Brezillon
2018-06-07 15:08 ` Miquel Raynal
1 sibling, 1 reply; 8+ messages in thread
From: Boris Brezillon @ 2018-06-01 14:36 UTC (permalink / raw)
To: David Woodhouse, Brian Norris, Boris Brezillon, Marek Vasut,
Richard Weinberger, linux-mtd, Mark Brown, linux-spi
Cc: Cyrille Pitchen, Vignesh R, Thomas Petazzoni
Make use of the SPI mem direct mapping API.
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
---
drivers/mtd/devices/m25p80.c | 149 ++++++++++++++++++++++++++-----------------
1 file changed, 89 insertions(+), 60 deletions(-)
diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
index 1dd5f0420b5a..285fc94118ae 100644
--- a/drivers/mtd/devices/m25p80.c
+++ b/drivers/mtd/devices/m25p80.c
@@ -31,6 +31,10 @@
struct m25p {
struct spi_mem *spimem;
struct spi_nor spi_nor;
+ struct {
+ struct spi_mem_dirmap_desc *write;
+ struct spi_mem_dirmap_desc *read;
+ } dirmap;
};
static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
@@ -65,38 +69,8 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *buf)
{
struct m25p *flash = nor->priv;
- struct spi_mem_op op =
- SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1),
- SPI_MEM_OP_ADDR(nor->addr_width, to, 1),
- SPI_MEM_OP_NO_DUMMY,
- SPI_MEM_OP_DATA_OUT(len, buf, 1));
- size_t remaining = len;
- int ret;
-
- /* get transfer protocols. */
- op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto);
- op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto);
- op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto);
-
- if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
- op.addr.nbytes = 0;
-
- while (remaining) {
- op.data.nbytes = remaining < UINT_MAX ? remaining : UINT_MAX;
- ret = spi_mem_adjust_op_size(flash->spimem, &op);
- if (ret)
- return ret;
- ret = spi_mem_exec_op(flash->spimem, &op);
- if (ret)
- return ret;
-
- op.addr.val += op.data.nbytes;
- remaining -= op.data.nbytes;
- op.data.buf.out += op.data.nbytes;
- }
-
- return len;
+ return spi_mem_dirmap_write(flash->dirmap.write, to, len, buf);
}
/*
@@ -107,39 +81,66 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
u_char *buf)
{
struct m25p *flash = nor->priv;
- struct spi_mem_op op =
- SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1),
- SPI_MEM_OP_ADDR(nor->addr_width, from, 1),
- SPI_MEM_OP_DUMMY(nor->read_dummy, 1),
- SPI_MEM_OP_DATA_IN(len, buf, 1));
- size_t remaining = len;
- int ret;
+
+ return spi_mem_dirmap_read(flash->dirmap.read, from, len, buf);
+}
+
+static int m25p_create_write_dirmap(struct m25p *flash)
+{
+ struct spi_nor *nor = &flash->spi_nor;
+ struct spi_mem_dirmap_info info = {
+ .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1),
+ SPI_MEM_OP_ADDR(nor->addr_width, 0, 1),
+ SPI_MEM_OP_NO_DUMMY,
+ SPI_MEM_OP_DATA_OUT(0, NULL, 1)),
+ .offset = 0,
+ .length = flash->spi_nor.mtd.size,
+ };
+ struct spi_mem_op *op = &info.op_tmpl;
/* get transfer protocols. */
- op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto);
- op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto);
- op.dummy.buswidth = op.addr.buswidth;
- op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto);
+ op->cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto);
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto);
+ op->dummy.buswidth = op->addr.buswidth;
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto);
- /* convert the dummy cycles to the number of bytes */
- op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8;
+ if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
+ op->addr.nbytes = 0;
- while (remaining) {
- op.data.nbytes = remaining < UINT_MAX ? remaining : UINT_MAX;
- ret = spi_mem_adjust_op_size(flash->spimem, &op);
- if (ret)
- return ret;
+ flash->dirmap.write = spi_mem_dirmap_create(flash->spimem, &info);
+ if (IS_ERR(flash->dirmap.write))
+ return PTR_ERR(flash->dirmap.write);
- ret = spi_mem_exec_op(flash->spimem, &op);
- if (ret)
- return ret;
+ return 0;
+}
- op.addr.val += op.data.nbytes;
- remaining -= op.data.nbytes;
- op.data.buf.in += op.data.nbytes;
- }
+static int m25p_create_read_dirmap(struct m25p *flash)
+{
+ struct spi_nor *nor = &flash->spi_nor;
+ struct spi_mem_dirmap_info info = {
+ .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1),
+ SPI_MEM_OP_ADDR(nor->addr_width, 0, 1),
+ SPI_MEM_OP_DUMMY(nor->read_dummy, 1),
+ SPI_MEM_OP_DATA_IN(0, NULL, 1)),
+ .offset = 0,
+ .length = flash->spi_nor.mtd.size,
+ };
+ struct spi_mem_op *op = &info.op_tmpl;
- return len;
+ /* get transfer protocols. */
+ op->cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto);
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto);
+ op->dummy.buswidth = op->addr.buswidth;
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto);
+
+ /* convert the dummy cycles to the number of bytes */
+ op->dummy.nbytes = (nor->read_dummy * op->dummy.buswidth) / 8;
+
+ flash->dirmap.read = spi_mem_dirmap_create(flash->spimem, &info);
+ if (IS_ERR(flash->dirmap.read))
+ return PTR_ERR(flash->dirmap.read);
+
+ return 0;
}
/*
@@ -215,19 +216,47 @@ static int m25p_probe(struct spi_mem *spimem)
if (ret)
return ret;
- return mtd_device_register(&nor->mtd, data ? data->parts : NULL,
- data ? data->nr_parts : 0);
+ ret = m25p_create_write_dirmap(flash);
+ if (ret)
+ return ret;
+
+ ret = m25p_create_read_dirmap(flash);
+ if (ret)
+ goto err_destroy_write_dirmap;
+
+ ret = mtd_device_register(&nor->mtd, data ? data->parts : NULL,
+ data ? data->nr_parts : 0);
+ if (ret)
+ goto err_destroy_read_dirmap;
+
+ return 0;
+
+err_destroy_read_dirmap:
+ spi_mem_dirmap_destroy(flash->dirmap.read);
+
+err_destroy_write_dirmap:
+ spi_mem_dirmap_destroy(flash->dirmap.write);
+
+ return ret;
}
static int m25p_remove(struct spi_mem *spimem)
{
struct m25p *flash = spi_mem_get_drvdata(spimem);
+ int ret;
spi_nor_restore(&flash->spi_nor);
/* Clean up MTD stuff. */
- return mtd_device_unregister(&flash->spi_nor.mtd);
+ ret = mtd_device_unregister(&flash->spi_nor.mtd);
+ if (ret)
+ return ret;
+
+ spi_mem_dirmap_destroy(flash->dirmap.read);
+ spi_mem_dirmap_destroy(flash->dirmap.write);
+
+ return 0;
}
static void m25p_shutdown(struct spi_mem *spimem)
--
2.14.1
* Re: [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping
2018-06-01 14:36 ` [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping Boris Brezillon
@ 2018-06-07 15:01 ` Miquel Raynal
2018-06-07 15:16 ` Boris Brezillon
0 siblings, 1 reply; 8+ messages in thread
From: Miquel Raynal @ 2018-06-07 15:01 UTC (permalink / raw)
To: Boris Brezillon
Cc: Vignesh R, Richard Weinberger, Cyrille Pitchen, linux-spi,
Marek Vasut, Mark Brown, linux-mtd, Thomas Petazzoni,
Brian Norris, David Woodhouse
Hi Boris,
On Fri, 1 Jun 2018 16:36:02 +0200, Boris Brezillon
<boris.brezillon@bootlin.com> wrote:
> Most modern QSPI controllers support can directly map a SPI memory (or
s/support// ?
> a portion of the SPI memory) in the CPU address space. Most of the time
> this brings significant performance improvements as it automates the
> whole process of sending SPI memory operations every time a new region
> is accessed.
>
> This new API allow SPI memory driver to create direct mappings and then
s/allow/allows/
s/driver/drivers/ ?
> use them to access the memory instead of using spi_mem_exec_op().
>
> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
> ---
> drivers/spi/spi-mem.c | 267 ++++++++++++++++++++++++++++++++++++++++----
> include/linux/spi/spi-mem.h | 72 ++++++++++++
> 2 files changed, 318 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
> index 990770dfa5cf..90ea0c5263a7 100644
> --- a/drivers/spi/spi-mem.c
> +++ b/drivers/spi/spi-mem.c
> @@ -175,6 +175,44 @@ bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)
> }
> EXPORT_SYMBOL_GPL(spi_mem_supports_op);
>
> +static int spi_mem_access_start(struct spi_mem *mem)
> +{
> + struct spi_controller *ctlr = mem->spi->controller;
> +
> + /*
> + * Flush the message queue before executing our SPI memory
> + * operation to prevent preemption of regular SPI transfers.
> + */
> + spi_flush_queue(ctlr);
> +
> + if (ctlr->auto_runtime_pm) {
> + int ret;
> +
> + ret = pm_runtime_get_sync(ctlr->dev.parent);
> + if (ret < 0) {
> + dev_err(&ctlr->dev, "Failed to power device: %d\n",
> + ret);
> + return ret;
> + }
> + }
> +
> + mutex_lock(&ctlr->bus_lock_mutex);
> + mutex_lock(&ctlr->io_mutex);
> +
> + return 0;
> +}
> +
> +static void spi_mem_access_end(struct spi_mem *mem)
> +{
> + struct spi_controller *ctlr = mem->spi->controller;
> +
> + mutex_unlock(&ctlr->io_mutex);
> + mutex_unlock(&ctlr->bus_lock_mutex);
> +
> + if (ctlr->auto_runtime_pm)
> + pm_runtime_put(ctlr->dev.parent);
> +}
> +
> /**
> * spi_mem_exec_op() - Execute a memory operation
> * @mem: the SPI memory
> @@ -200,30 +238,13 @@ int spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
> return -ENOTSUPP;
>
> if (ctlr->mem_ops) {
> - /*
> - * Flush the message queue before executing our SPI memory
> - * operation to prevent preemption of regular SPI transfers.
> - */
> - spi_flush_queue(ctlr);
> -
> - if (ctlr->auto_runtime_pm) {
> - ret = pm_runtime_get_sync(ctlr->dev.parent);
> - if (ret < 0) {
> - dev_err(&ctlr->dev,
> - "Failed to power device: %d\n",
> - ret);
> - return ret;
> - }
> - }
> + ret = spi_mem_access_start(mem);
> + if (ret)
> + return ret;
>
> - mutex_lock(&ctlr->bus_lock_mutex);
> - mutex_lock(&ctlr->io_mutex);
> ret = ctlr->mem_ops->exec_op(mem, op);
> - mutex_unlock(&ctlr->io_mutex);
> - mutex_unlock(&ctlr->bus_lock_mutex);
>
> - if (ctlr->auto_runtime_pm)
> - pm_runtime_put(ctlr->dev.parent);
> + spi_mem_access_end(mem);
As this is something you tell me on a weekly basis: would you mind
separating the direct mapping support and the introduction of the
spi_mem_access_start/end() helpers into different patches? :)
>
> /*
> * Some controllers only optimize specific paths (typically the
> @@ -336,6 +357,210 @@ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
> }
> EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
>
[...]
> +
> +/**
> + * struct spi_mem_dirmap_desc - Direct mapping descriptor
> + * @mem: the SPI memory device this direct mapping is attached to
> + * @info: information passed at direct mapping creation time
> + * @nodirmap: set to true if the SPI controller does not implement
> + * ->mem_ops->dirmap_create() or when this function returned an
s/returned/returns/
> + * error. If dirmap is true, all spi_mem_dirmap_{read,write}()
> + * calls will use spi_mem_exec_op() to access the memory. This is a
> + * degraded mode that allows higher spi_mem drivers to use the same
> + * code no matter if the controller supports direct mapping or not
> + * @priv: field pointing to controller specific data
> + *
> + * Common part of a direct mapping descriptor. This object is created by
> + * spi_mem_dirmap_create() and controller implementation of ->create_dirmap()
> + * can create/attach direct mapping resources to the descriptor in the ->priv
> + * field.
> + */
> +struct spi_mem_dirmap_desc {
> + struct spi_mem *mem;
> + struct spi_mem_dirmap_info info;
> + bool nodirmap;
> + void *priv;
> +};
> +
> /**
> * struct spi_mem - describes a SPI memory device
> * @spi: the underlying SPI device
> @@ -167,10 +210,24 @@ static inline void *spi_mem_get_drvdata(struct spi_mem *mem)
> * limitations)
> * @supports_op: check if an operation is supported by the controller
> * @exec_op: execute a SPI memory operation
> + * @dirmap_create: create a direct mapping descriptor that can later be used to
> + * access the memory device. This method is optional
Only *dirmap_create() is marked as optional while all are.
> + * @dirmap_destroy: destroy a memory descriptor previous created by
> + * ->dirmap_create()
s/previous/previously/
> + * @dirmap_read: read data from the memory device using the direct mapping
> + * created by ->dirmap_create().
> + * @dirmap_write: write data to the memory device using the direct mapping
> + * created by ->dirmap_create().
I think there is a better kernel-doc way to reference dirmap_create(),
maybe with '@' (I don't remember exactly).
> *
> * This interface should be implemented by SPI controllers providing an
> * high-level interface to execute SPI memory operation, which is usually the
> * case for QSPI controllers.
> + *
> + * Note on ->dirmap_{read,write}(): drivers should avoid accessing the direct
> + * mapping from the CPU because doing that can stall the CPU waiting for the
> + * SPI mem transaction to finish, and this will make real-time maintainers
> + * unhappy and might make your system less reactive. Instead, drivers should
> + * use DMA to access this direct mapping.
> */
> struct spi_controller_mem_ops {
> int (*adjust_op_size)(struct spi_mem *mem, struct spi_mem_op *op);
> @@ -178,6 +235,12 @@ struct spi_controller_mem_ops {
> const struct spi_mem_op *op);
> int (*exec_op)(struct spi_mem *mem,
> const struct spi_mem_op *op);
> + int (*dirmap_create)(struct spi_mem_dirmap_desc *desc);
> + void (*dirmap_destroy)(struct spi_mem_dirmap_desc *desc);
> + ssize_t (*dirmap_read)(struct spi_mem_dirmap_desc *desc,
> + u64 offs, size_t len, void *buf);
> + ssize_t (*dirmap_write)(struct spi_mem_dirmap_desc *desc,
> + u64 offs, size_t len, const void *buf);
> };
>
> /**
> @@ -236,6 +299,15 @@ bool spi_mem_supports_op(struct spi_mem *mem,
> int spi_mem_exec_op(struct spi_mem *mem,
> const struct spi_mem_op *op);
>
> +struct spi_mem_dirmap_desc *
> +spi_mem_dirmap_create(struct spi_mem *mem,
> + const struct spi_mem_dirmap_info *info);
> +void spi_mem_dirmap_destroy(struct spi_mem_dirmap_desc *desc);
> +ssize_t spi_mem_dirmap_read(struct spi_mem_dirmap_desc *desc,
> + u64 offs, size_t len, void *buf);
> +ssize_t spi_mem_dirmap_write(struct spi_mem_dirmap_desc *desc,
> + u64 offs, size_t len, const void *buf);
> +
> int spi_mem_driver_register_with_owner(struct spi_mem_driver *drv,
> struct module *owner);
>
The rest is clear to me.
Thanks,
Miquèl
* Re: [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances
2018-06-01 14:36 ` [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances Boris Brezillon
@ 2018-06-07 15:08 ` Miquel Raynal
2018-06-07 15:18 ` Boris Brezillon
0 siblings, 1 reply; 8+ messages in thread
From: Miquel Raynal @ 2018-06-07 15:08 UTC (permalink / raw)
To: Boris Brezillon
Cc: Vignesh R, Richard Weinberger, Cyrille Pitchen, linux-spi,
Marek Vasut, Mark Brown, linux-mtd, Thomas Petazzoni,
Brian Norris, David Woodhouse
Hi Boris,
On Fri, 1 Jun 2018 16:36:03 +0200, Boris Brezillon
<boris.brezillon@bootlin.com> wrote:
> Make use of the SPI mem direct mapping API.
>
> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
> ---
> drivers/mtd/devices/m25p80.c | 149 ++++++++++++++++++++++++++-----------------
> 1 file changed, 89 insertions(+), 60 deletions(-)
>
> diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
> index 1dd5f0420b5a..285fc94118ae 100644
> --- a/drivers/mtd/devices/m25p80.c
> +++ b/drivers/mtd/devices/m25p80.c
> @@ -31,6 +31,10 @@
> struct m25p {
> struct spi_mem *spimem;
> struct spi_nor spi_nor;
> + struct {
> + struct spi_mem_dirmap_desc *write;
> + struct spi_mem_dirmap_desc *read;
> + } dirmap;
> };
While reading this patch I was a bit confused by this naming. You
refer to these descriptors as 'flash->dirmap.read/write', which is
confusing as this is not a function but a descriptor. Passing such a
variable to a function called spi_mem_dirmap_read/write() is also
confusing IMHO (see below).
Would you mind renaming them to something like "read/write_desc"?
[...]
> @@ -107,39 +81,66 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
> u_char *buf)
> {
> struct m25p *flash = nor->priv;
> - struct spi_mem_op op =
> - SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1),
> - SPI_MEM_OP_ADDR(nor->addr_width, from, 1),
> - SPI_MEM_OP_DUMMY(nor->read_dummy, 1),
> - SPI_MEM_OP_DATA_IN(len, buf, 1));
> - size_t remaining = len;
> - int ret;
> +
> + return spi_mem_dirmap_read(flash->dirmap.read, from, len, buf);
^
The place where I had trouble understanding because of the naming.
Thanks,
Miquèl
* Re: [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping
2018-06-07 15:01 ` Miquel Raynal
@ 2018-06-07 15:16 ` Boris Brezillon
0 siblings, 0 replies; 8+ messages in thread
From: Boris Brezillon @ 2018-06-07 15:16 UTC (permalink / raw)
To: Miquel Raynal
Cc: Vignesh R, Richard Weinberger, Cyrille Pitchen, linux-spi,
Marek Vasut, Mark Brown, linux-mtd, Thomas Petazzoni,
Brian Norris, David Woodhouse
Hi Miquel,
On Thu, 7 Jun 2018 17:01:53 +0200
Miquel Raynal <miquel.raynal@bootlin.com> wrote:
> Hi Boris,
>
> On Fri, 1 Jun 2018 16:36:02 +0200, Boris Brezillon
> <boris.brezillon@bootlin.com> wrote:
>
> > Most modern QSPI controllers support can directly map a SPI memory (or
>
> s/support// ?
Yep.
>
> > a portion of the SPI memory) in the CPU address space. Most of the time
> > this brings significant performance improvements as it automates the
> > whole process of sending SPI memory operations every time a new region
> > is accessed.
> >
> > This new API allow SPI memory driver to create direct mappings and then
>
> s/allow/allows/
> s/driver/drivers/ ?
Looks like I should have spent more time checking my commit message :-).
Will fix that.
>
> > use them to access the memory instead of using spi_mem_exec_op().
> >
> > Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
> > ---
> > drivers/spi/spi-mem.c | 267 ++++++++++++++++++++++++++++++++++++++++----
> > include/linux/spi/spi-mem.h | 72 ++++++++++++
> > 2 files changed, 318 insertions(+), 21 deletions(-)
> >
> > diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
> > index 990770dfa5cf..90ea0c5263a7 100644
> > --- a/drivers/spi/spi-mem.c
> > +++ b/drivers/spi/spi-mem.c
> > @@ -175,6 +175,44 @@ bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)
> > }
> > EXPORT_SYMBOL_GPL(spi_mem_supports_op);
> >
> > +static int spi_mem_access_start(struct spi_mem *mem)
> > +{
> > + struct spi_controller *ctlr = mem->spi->controller;
> > +
> > + /*
> > + * Flush the message queue before executing our SPI memory
> > + * operation to prevent preemption of regular SPI transfers.
> > + */
> > + spi_flush_queue(ctlr);
> > +
> > + if (ctlr->auto_runtime_pm) {
> > + int ret;
> > +
> > + ret = pm_runtime_get_sync(ctlr->dev.parent);
> > + if (ret < 0) {
> > + dev_err(&ctlr->dev, "Failed to power device: %d\n",
> > + ret);
> > + return ret;
> > + }
> > + }
> > +
> > + mutex_lock(&ctlr->bus_lock_mutex);
> > + mutex_lock(&ctlr->io_mutex);
> > +
> > + return 0;
> > +}
> > +
> > +static void spi_mem_access_end(struct spi_mem *mem)
> > +{
> > + struct spi_controller *ctlr = mem->spi->controller;
> > +
> > + mutex_unlock(&ctlr->io_mutex);
> > + mutex_unlock(&ctlr->bus_lock_mutex);
> > +
> > + if (ctlr->auto_runtime_pm)
> > + pm_runtime_put(ctlr->dev.parent);
> > +}
> > +
> > /**
> > * spi_mem_exec_op() - Execute a memory operation
> > * @mem: the SPI memory
> > @@ -200,30 +238,13 @@ int spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
> > return -ENOTSUPP;
> >
> > if (ctlr->mem_ops) {
> > - /*
> > - * Flush the message queue before executing our SPI memory
> > - * operation to prevent preemption of regular SPI transfers.
> > - */
> > - spi_flush_queue(ctlr);
> > -
> > - if (ctlr->auto_runtime_pm) {
> > - ret = pm_runtime_get_sync(ctlr->dev.parent);
> > - if (ret < 0) {
> > - dev_err(&ctlr->dev,
> > - "Failed to power device: %d\n",
> > - ret);
> > - return ret;
> > - }
> > - }
> > + ret = spi_mem_access_start(mem);
> > + if (ret)
> > + return ret;
> >
> > - mutex_lock(&ctlr->bus_lock_mutex);
> > - mutex_lock(&ctlr->io_mutex);
> > ret = ctlr->mem_ops->exec_op(mem, op);
> > - mutex_unlock(&ctlr->io_mutex);
> > - mutex_unlock(&ctlr->bus_lock_mutex);
> >
> > - if (ctlr->auto_runtime_pm)
> > - pm_runtime_put(ctlr->dev.parent);
> > + spi_mem_access_end(mem);
>
> As this is something you tell me on a weekly basis: would you mind
> separating the direct mapping support and the introduction of the
> spi_mem_access_start/end() helpers into different patches? :)
Sure, actually I was expecting this request :P.
>
> >
> > /*
> > * Some controllers only optimize specific paths (typically the
> > @@ -336,6 +357,210 @@ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
> > }
> > EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
> >
>
> [...]
>
> > +
> > +/**
> > + * struct spi_mem_dirmap_desc - Direct mapping descriptor
> > + * @mem: the SPI memory device this direct mapping is attached to
> > + * @info: information passed at direct mapping creation time
> > + * @nodirmap: set to true if the SPI controller does not implement
> > + * ->mem_ops->dirmap_create() or when this function returned an
>
> s/returned/returns/
No, I really meant returned here.
>
> > + * error. If nodirmap is true, all spi_mem_dirmap_{read,write}()
> > + * calls will use spi_mem_exec_op() to access the memory. This is a
> > + * degraded mode that allows higher spi_mem drivers to use the same
> > + * code no matter if the controller supports direct mapping or not
> > + * @priv: field pointing to controller specific data
> > + *
> > + * Common part of a direct mapping descriptor. This object is created by
> > + * spi_mem_dirmap_create() and controller implementation of ->dirmap_create()
> > + * can create/attach direct mapping resources to the descriptor in the ->priv
> > + * field.
> > + */
> > +struct spi_mem_dirmap_desc {
> > + struct spi_mem *mem;
> > + struct spi_mem_dirmap_info info;
> > + bool nodirmap;
> > + void *priv;
> > +};
> > +
> > /**
> > * struct spi_mem - describes a SPI memory device
> > * @spi: the underlying SPI device
> > @@ -167,10 +210,24 @@ static inline void *spi_mem_get_drvdata(struct spi_mem *mem)
> > * limitations)
> > * @supports_op: check if an operation is supported by the controller
> > * @exec_op: execute a SPI memory operation
> > + * @dirmap_create: create a direct mapping descriptor that can later be used to
> > + * access the memory device. This method is optional
>
> Only ->dirmap_create() is marked as optional, while all of them are.
It's a bit more complicated than that. If ->dirmap_create() is not
implemented then all other hooks should be left empty. On the other
hand, if it's implemented then at least one of the
->dirmap_{read,write}() should be implemented. ->dirmap_destroy() is
optional. I'll try to clarify the situation.
>
> > + * @dirmap_destroy: destroy a memory descriptor previous created by
> > + * ->dirmap_create()
>
> s/previous/previously/
Yep.
>
> > + * @dirmap_read: read data from the memory device using the direct mapping
> > + * created by ->dirmap_create().
> > + * @dirmap_write: write data to the memory device using the direct mapping
> > + * created by ->dirmap_create().
>
> I think there is a better kernel-doc way to reference dirmap_create(),
> maybe with '@' (I don't remember exactly).
I think it's &struct_spi_mem_ops->dirmap_create(), but I'm not sure.
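For reference, assuming current kernel-doc conventions (Documentation/doc-guide/kernel-doc.rst), a member of the same struct is cross-referenced with `@member`, and a member of another struct with `&struct_name->member`; the quoted lines might then read:

```c
/**
 * ...
 * @dirmap_read: read data from the memory device using the direct mapping
 *		 created by @dirmap_create
 */
```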
Thanks for the review.
Boris
* Re: [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances
2018-06-07 15:08 ` Miquel Raynal
@ 2018-06-07 15:18 ` Boris Brezillon
2018-06-07 15:23 ` Miquel Raynal
0 siblings, 1 reply; 8+ messages in thread
From: Boris Brezillon @ 2018-06-07 15:18 UTC (permalink / raw)
To: Miquel Raynal
Cc: Vignesh R, Richard Weinberger, Cyrille Pitchen, linux-spi,
Marek Vasut, Mark Brown, linux-mtd, Thomas Petazzoni,
Brian Norris, David Woodhouse
On Thu, 7 Jun 2018 17:08:24 +0200
Miquel Raynal <miquel.raynal@bootlin.com> wrote:
> Hi Boris,
>
> On Fri, 1 Jun 2018 16:36:03 +0200, Boris Brezillon
> <boris.brezillon@bootlin.com> wrote:
>
> > Make use of the SPI mem direct mapping API.
> >
> > Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
> > ---
> > drivers/mtd/devices/m25p80.c | 149 ++++++++++++++++++++++++++-----------------
> > 1 file changed, 89 insertions(+), 60 deletions(-)
> >
> > diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
> > index 1dd5f0420b5a..285fc94118ae 100644
> > --- a/drivers/mtd/devices/m25p80.c
> > +++ b/drivers/mtd/devices/m25p80.c
> > @@ -31,6 +31,10 @@
> > struct m25p {
> > struct spi_mem *spimem;
> > struct spi_nor spi_nor;
> > + struct {
> > + struct spi_mem_dirmap_desc *write;
> > + struct spi_mem_dirmap_desc *read;
> > + } dirmap;
> > };
>
> While reading this patch I was a bit confused by this naming. You
> refer to these descriptors as 'flash->dirmap.read/write', which is
> confusing as they are not functions but descriptors. Passing such a
> variable to a function called spi_mem_dirmap_read/write() is also
> confusing IMHO (see below).
>
> Would you mind renaming them to something like "read/write_desc"?
Sure. How about rdesc and wdesc to keep the names short?
* Re: [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances
2018-06-07 15:18 ` Boris Brezillon
@ 2018-06-07 15:23 ` Miquel Raynal
0 siblings, 0 replies; 8+ messages in thread
From: Miquel Raynal @ 2018-06-07 15:23 UTC (permalink / raw)
To: Boris Brezillon
Cc: Vignesh R, Richard Weinberger, Cyrille Pitchen, linux-spi,
Marek Vasut, Mark Brown, linux-mtd, Thomas Petazzoni,
Brian Norris, David Woodhouse
Hi Boris,
On Thu, 7 Jun 2018 17:18:46 +0200, Boris Brezillon
<boris.brezillon@bootlin.com> wrote:
> On Thu, 7 Jun 2018 17:08:24 +0200
> Miquel Raynal <miquel.raynal@bootlin.com> wrote:
>
> > Hi Boris,
> >
> > On Fri, 1 Jun 2018 16:36:03 +0200, Boris Brezillon
> > <boris.brezillon@bootlin.com> wrote:
> >
> > > Make use of the SPI mem direct mapping API.
> > >
> > > Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
> > > ---
> > > drivers/mtd/devices/m25p80.c | 149 ++++++++++++++++++++++++++-----------------
> > > 1 file changed, 89 insertions(+), 60 deletions(-)
> > >
> > > diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
> > > index 1dd5f0420b5a..285fc94118ae 100644
> > > --- a/drivers/mtd/devices/m25p80.c
> > > +++ b/drivers/mtd/devices/m25p80.c
> > > @@ -31,6 +31,10 @@
> > > struct m25p {
> > > struct spi_mem *spimem;
> > > struct spi_nor spi_nor;
> > > + struct {
> > > + struct spi_mem_dirmap_desc *write;
> > > + struct spi_mem_dirmap_desc *read;
> > > + } dirmap;
> > > };
> >
> > While reading this patch I was a bit confused by this naming. You
> > refer to these descriptors as 'flash->dirmap.read/write', which is
> > confusing as they are not functions but descriptors. Passing such a
> > variable to a function called spi_mem_dirmap_read/write() is also
> > confusing IMHO (see below).
> >
> > Would you mind renaming them to something like "read/write_desc"?
>
> Sure. How about rdesc and wdesc to keep the names short?
>
Sure, it's just to avoid having descriptors named after actions.
Miquèl
Thread overview: 8+ messages (newest: 2018-06-07 15:23 UTC); links below jump to the message on this page:
2018-06-01 14:36 [RFC PATCH 0/2] spi: spi-mem: Add a direct mapping API Boris Brezillon
2018-06-01 14:36 ` [RFC PATCH 1/2] spi: spi-mem: Add a new API to support direct mapping Boris Brezillon
2018-06-07 15:01 ` Miquel Raynal
2018-06-07 15:16 ` Boris Brezillon
2018-06-01 14:36 ` [RFC PATCH 2/2] mtd: m25p80: Use the SPI mem direct API to possibly improve performances Boris Brezillon
2018-06-07 15:08 ` Miquel Raynal
2018-06-07 15:18 ` Boris Brezillon
2018-06-07 15:23 ` Miquel Raynal