* [PATCH 01/11] crypto: qce - Add support for crypto address read
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2024-02-21 18:02 ` Sricharan Ramabadhran
2023-12-14 11:42 ` [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w Md Sadre Alam
` (9 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Get the crypto base address from the DT. This will be
used for command descriptor support for crypto register
r/w via BAM/DMA.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/core.c | 9 +++++++++
drivers/crypto/qce/core.h | 1 +
2 files changed, 10 insertions(+)
diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
index 28b5fd823827..5af0dc40738a 100644
--- a/drivers/crypto/qce/core.c
+++ b/drivers/crypto/qce/core.c
@@ -192,6 +192,7 @@ static int qce_crypto_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct qce_device *qce;
+ struct resource *res;
int ret;
qce = devm_kzalloc(dev, sizeof(*qce), GFP_KERNEL);
@@ -205,6 +206,14 @@ static int qce_crypto_probe(struct platform_device *pdev)
if (IS_ERR(qce->base))
return PTR_ERR(qce->base);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -ENOMEM;
+ qce->base_dma = dma_map_resource(dev, res->start, resource_size(res),
+ DMA_BIDIRECTIONAL, 0);
+ if (dma_mapping_error(dev, qce->base_dma))
+ return -ENXIO;
+
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
if (ret < 0)
return ret;
diff --git a/drivers/crypto/qce/core.h b/drivers/crypto/qce/core.h
index 228fcd69ec51..25e2af45c047 100644
--- a/drivers/crypto/qce/core.h
+++ b/drivers/crypto/qce/core.h
@@ -39,6 +39,7 @@ struct qce_device {
struct qce_dma_data dma;
int burst_size;
unsigned int pipe_pair_id;
+ dma_addr_t base_dma;
int (*async_req_enqueue)(struct qce_device *qce,
struct crypto_async_request *req);
void (*async_req_done)(struct qce_device *qce, int ret);
--
2.34.1
^ permalink raw reply related	[flat|nested] 17+ messages in thread
* Re: [PATCH 01/11] crypto: qce - Add support for crypto address read
2023-12-14 11:42 ` [PATCH 01/11] crypto: qce - Add support for crypto address read Md Sadre Alam
@ 2024-02-21 18:02 ` Sricharan Ramabadhran
0 siblings, 0 replies; 17+ messages in thread
From: Sricharan Ramabadhran @ 2024-02-21 18:02 UTC (permalink / raw)
To: Md Sadre Alam, thara.gopinath, herbert, davem, agross, andersson,
konrad.dybcio, vkoul, linux-crypto, linux-arm-msm, linux-kernel,
dmaengine, quic_varada
On 12/14/2023 5:12 PM, Md Sadre Alam wrote:
> Get the crypto base address from the DT. This will be
> used for command descriptor support for crypto register
> r/w via BAM/DMA.
>
> Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
> ---
> drivers/crypto/qce/core.c | 9 +++++++++
> drivers/crypto/qce/core.h | 1 +
> 2 files changed, 10 insertions(+)
>
> diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
> index 28b5fd823827..5af0dc40738a 100644
> --- a/drivers/crypto/qce/core.c
> +++ b/drivers/crypto/qce/core.c
> @@ -192,6 +192,7 @@ static int qce_crypto_probe(struct platform_device *pdev)
> {
> struct device *dev = &pdev->dev;
> struct qce_device *qce;
> + struct resource *res;
> int ret;
>
> qce = devm_kzalloc(dev, sizeof(*qce), GFP_KERNEL);
> @@ -205,6 +206,14 @@ static int qce_crypto_probe(struct platform_device *pdev)
> if (IS_ERR(qce->base))
> return PTR_ERR(qce->base);
>
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
Can this be combined with devm_platform_get_and_ioremap_resource ?
> + if (!res)
> + return -ENOMEM;
> + qce->base_dma = dma_map_resource(dev, res->start, resource_size(res),
> + DMA_BIDIRECTIONAL, 0);
unmap in remove and error cases ?
Regards,
Sricharan
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
2023-12-14 11:42 ` [PATCH 01/11] crypto: qce - Add support for crypto address read Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-15 0:11 ` kernel test robot
` (2 more replies)
2023-12-14 11:42 ` [PATCH 03/11] crypto: qce - Convert register r/w for skcipher via BAM/DMA Md Sadre Alam
` (8 subsequent siblings)
10 siblings, 3 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add BAM/DMA support for crypto register read/write.
With this change, multiple crypto registers will get
written using BAM in one go.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/core.h | 9 ++
drivers/crypto/qce/dma.c | 233 ++++++++++++++++++++++++++++++++++++++
drivers/crypto/qce/dma.h | 24 +++-
3 files changed, 265 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/qce/core.h b/drivers/crypto/qce/core.h
index 25e2af45c047..bf28dedd1509 100644
--- a/drivers/crypto/qce/core.h
+++ b/drivers/crypto/qce/core.h
@@ -40,6 +40,8 @@ struct qce_device {
int burst_size;
unsigned int pipe_pair_id;
dma_addr_t base_dma;
+ __le32 *reg_read_buf;
+ dma_addr_t reg_buf_phys;
int (*async_req_enqueue)(struct qce_device *qce,
struct crypto_async_request *req);
void (*async_req_done)(struct qce_device *qce, int ret);
@@ -59,4 +61,11 @@ struct qce_algo_ops {
int (*async_req_handle)(struct crypto_async_request *async_req);
};
+int qce_write_reg_dma(struct qce_device *qce, unsigned int offset, u32 val,
+ int cnt);
+int qce_read_reg_dma(struct qce_device *qce, unsigned int offset, void *buff,
+ int cnt);
+void qce_clear_bam_transaction(struct qce_device *qce);
+int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags);
+struct qce_bam_transaction *qce_alloc_bam_txn(struct qce_dma_data *dma);
#endif /* _CORE_H_ */
diff --git a/drivers/crypto/qce/dma.c b/drivers/crypto/qce/dma.c
index 46db5bf366b4..85c8d4107afa 100644
--- a/drivers/crypto/qce/dma.c
+++ b/drivers/crypto/qce/dma.c
@@ -4,12 +4,220 @@
*/
#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
#include <crypto/scatterwalk.h>
#include "dma.h"
+#include "core.h"
+
+#define QCE_REG_BUF_DMA_ADDR(qce, vaddr) \
+ ((qce)->reg_buf_phys + \
+ ((uint8_t *)(vaddr) - (uint8_t *)(qce)->reg_read_buf))
+
+void qce_clear_bam_transaction(struct qce_device *qce)
+{
+ struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
+
+ qce_bam_txn->qce_bam_ce_index = 0;
+ qce_bam_txn->qce_write_sgl_cnt = 0;
+ qce_bam_txn->qce_read_sgl_cnt = 0;
+ qce_bam_txn->qce_bam_ce_index = 0;
+ qce_bam_txn->qce_pre_bam_ce_index = 0;
+}
+
+static int qce_dma_prep_cmd_sg(struct qce_device *qce, struct dma_chan *chan,
+ struct scatterlist *qce_bam_sgl,
+ int qce_sgl_cnt, unsigned long flags,
+ enum dma_transfer_direction dir,
+ dma_async_tx_callback cb, void *cb_param)
+{
+ struct dma_async_tx_descriptor *dma_desc;
+ struct qce_desc_info *desc;
+ dma_cookie_t cookie;
+
+ desc = qce->dma.qce_bam_txn->qce_desc;
+
+ if (!qce_bam_sgl || !qce_sgl_cnt)
+ return -EINVAL;
+
+ if (!dma_map_sg(qce->dev, qce_bam_sgl,
+ qce_sgl_cnt, dir)) {
+ dev_err(qce->dev, "failure in mapping sgl for cmd desc\n");
+ return -ENOMEM;
+ }
+
+ dma_desc = dmaengine_prep_slave_sg(chan, qce_bam_sgl, qce_sgl_cnt,
+ dir, flags);
+ if (!dma_desc) {
+ pr_err("%s:failure in prep cmd desc\n", __func__);
+ dma_unmap_sg(qce->dev, qce_bam_sgl, qce_sgl_cnt, dir);
+ kfree(desc);
+ return -EINVAL;
+ }
+
+ desc->dma_desc = dma_desc;
+ desc->dma_desc->callback = cb;
+ desc->dma_desc->callback_param = cb_param;
+
+ cookie = dmaengine_submit(desc->dma_desc);
+
+ return dma_submit_error(cookie);
+}
+
+int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags)
+{
+ struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
+ struct dma_chan *chan = qce->dma.rxchan;
+ unsigned long desc_flags;
+ int ret = 0;
+
+ desc_flags = DMA_PREP_CMD;
+
+ /* For command descriptors, always use the consumer pipe,
+ * as recommended by the HPG.
+ */
+
+ if (qce_bam_txn->qce_read_sgl_cnt) {
+ ret = qce_dma_prep_cmd_sg(qce, chan,
+ qce_bam_txn->qce_reg_read_sgl,
+ qce_bam_txn->qce_read_sgl_cnt,
+ desc_flags, DMA_DEV_TO_MEM,
+ NULL, NULL);
+ if (ret) {
+ pr_err("error while submitting cmd desc for rx\n");
+ return ret;
+ }
+ }
+
+ if (qce_bam_txn->qce_write_sgl_cnt) {
+ ret = qce_dma_prep_cmd_sg(qce, chan,
+ qce_bam_txn->qce_reg_write_sgl,
+ qce_bam_txn->qce_write_sgl_cnt,
+ desc_flags, DMA_MEM_TO_DEV,
+ NULL, NULL);
+ }
+
+ if (ret) {
+ pr_err("error while submitting cmd desc for tx\n");
+ return ret;
+ }
+
+ qce_dma_issue_pending(&qce->dma);
+
+ return ret;
+}
+
+static void qce_prep_dma_command_desc(struct qce_device *qce,
+ struct qce_dma_data *dma, bool read, unsigned int addr,
+ void *buff, int size)
+{
+ struct qce_bam_transaction *qce_bam_txn = dma->qce_bam_txn;
+ struct bam_cmd_element *qce_bam_ce_buffer;
+ int qce_bam_ce_size, cnt, index;
+
+ index = qce_bam_txn->qce_bam_ce_index;
+ qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce[index];
+ if (read)
+ bam_prep_ce(qce_bam_ce_buffer, addr, BAM_READ_COMMAND,
+ QCE_REG_BUF_DMA_ADDR(qce,
+ (unsigned int *)buff));
+ else
+ bam_prep_ce_le32(qce_bam_ce_buffer, addr, BAM_WRITE_COMMAND,
+ *((__le32 *)buff));
+
+ if (read) {
+ cnt = qce_bam_txn->qce_read_sgl_cnt;
+ qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce
+ [qce_bam_txn->qce_pre_bam_ce_index];
+ qce_bam_txn->qce_bam_ce_index += size;
+ qce_bam_ce_size = (qce_bam_txn->qce_bam_ce_index -
+ qce_bam_txn->qce_pre_bam_ce_index) *
+ sizeof(struct bam_cmd_element);
+
+ sg_set_buf(&qce_bam_txn->qce_reg_read_sgl[cnt],
+ qce_bam_ce_buffer,
+ qce_bam_ce_size);
+
+ ++qce_bam_txn->qce_read_sgl_cnt;
+ qce_bam_txn->qce_pre_bam_ce_index =
+ qce_bam_txn->qce_bam_ce_index;
+ } else {
+ cnt = qce_bam_txn->qce_write_sgl_cnt;
+ qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce
+ [qce_bam_txn->qce_pre_bam_ce_index];
+ qce_bam_txn->qce_bam_ce_index += size;
+ qce_bam_ce_size = (qce_bam_txn->qce_bam_ce_index -
+ qce_bam_txn->qce_pre_bam_ce_index) *
+ sizeof(struct bam_cmd_element);
+
+ sg_set_buf(&qce_bam_txn->qce_reg_write_sgl[cnt],
+ qce_bam_ce_buffer,
+ qce_bam_ce_size);
+
+ ++qce_bam_txn->qce_write_sgl_cnt;
+ qce_bam_txn->qce_pre_bam_ce_index =
+ qce_bam_txn->qce_bam_ce_index;
+ }
+}
+
+int qce_write_reg_dma(struct qce_device *qce,
+ unsigned int offset, u32 val, int cnt)
+{
+ void *buff;
+ unsigned int reg_addr;
+
+ buff = &val;
+
+ reg_addr = ((unsigned int)(qce->base_dma) + offset);
+ qce_prep_dma_command_desc(qce, &qce->dma, false, reg_addr, buff, cnt);
+
+ return 0;
+}
+
+int qce_read_reg_dma(struct qce_device *qce,
+ unsigned int offset, void *buff, int cnt)
+{
+ void *vaddr;
+ unsigned int reg_addr;
+
+ reg_addr = ((unsigned int)(qce->base_dma) + offset);
+ vaddr = qce->reg_read_buf;
+
+ qce_prep_dma_command_desc(qce, &qce->dma, true, reg_addr, vaddr, cnt);
+ memcpy(buff, vaddr, 4);
+
+ return 0;
+}
+
+struct qce_bam_transaction *qce_alloc_bam_txn(struct qce_dma_data *dma)
+{
+ struct qce_bam_transaction *qce_bam_txn;
+
+ dma->qce_bam_txn = kmalloc(sizeof(*qce_bam_txn), GFP_KERNEL);
+ if (!dma->qce_bam_txn)
+ return NULL;
+
+ dma->qce_bam_txn->qce_desc = kzalloc(sizeof(struct qce_desc_info),
+ GFP_KERNEL);
+ if (!dma->qce_bam_txn->qce_desc) {
+ kfree(dma->qce_bam_txn);
+ return NULL;
+ }
+
+ sg_init_table(dma->qce_bam_txn->qce_reg_write_sgl,
+ QCE_BAM_CMD_SGL_SIZE);
+
+ sg_init_table(dma->qce_bam_txn->qce_reg_read_sgl,
+ QCE_BAM_CMD_SGL_SIZE);
+
+ qce_bam_txn = dma->qce_bam_txn;
+
+ return qce_bam_txn;
+}
int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
{
+ struct qce_device *qce = container_of(dma, struct qce_device, dma);
int ret;
dma->txchan = dma_request_chan(dev, "tx");
@@ -31,6 +239,21 @@ int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
dma->ignore_buf = dma->result_buf + QCE_RESULT_BUF_SZ;
+ dma->qce_bam_txn = qce_alloc_bam_txn(dma);
+ if (!dma->qce_bam_txn) {
+ pr_err("Failed to allocate bam transaction\n");
+ return -ENOMEM;
+ }
+
+ qce->reg_read_buf = dmam_alloc_coherent(qce->dev,
+ QCE_MAX_REG_READ *
+ sizeof(*qce->reg_read_buf),
+ &qce->reg_buf_phys, GFP_KERNEL);
+ if (!qce->reg_read_buf) {
+ pr_err("Failed to allocate reg_read_buf\n");
+ return -ENOMEM;
+ }
+
return 0;
error_nomem:
dma_release_channel(dma->rxchan);
@@ -41,9 +264,19 @@ int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
void qce_dma_release(struct qce_dma_data *dma)
{
+ struct qce_device *qce = container_of(dma,
+ struct qce_device, dma);
+
dma_release_channel(dma->txchan);
dma_release_channel(dma->rxchan);
kfree(dma->result_buf);
+ if (qce->reg_read_buf)
+ dmam_free_coherent(qce->dev, QCE_MAX_REG_READ *
+ sizeof(*qce->reg_read_buf),
+ qce->reg_read_buf,
+ qce->reg_buf_phys);
+ kfree(dma->qce_bam_txn->qce_desc);
+ kfree(dma->qce_bam_txn);
}
struct scatterlist *
diff --git a/drivers/crypto/qce/dma.h b/drivers/crypto/qce/dma.h
index 786402169360..f10991590b3f 100644
--- a/drivers/crypto/qce/dma.h
+++ b/drivers/crypto/qce/dma.h
@@ -7,6 +7,7 @@
#define _DMA_H_
#include <linux/dmaengine.h>
+#include <linux/dma/qcom_bam_dma.h>
/* maximum data transfer block size between BAM and CE */
#define QCE_BAM_BURST_SIZE 64
@@ -14,6 +15,10 @@
#define QCE_AUTHIV_REGS_CNT 16
#define QCE_AUTH_BYTECOUNT_REGS_CNT 4
#define QCE_CNTRIV_REGS_CNT 4
+#define QCE_BAM_CMD_SGL_SIZE 64
+#define QCE_BAM_CMD_ELEMENT_SIZE 64
+#define QCE_DMA_DESC_FLAG_BAM_NWD (0x0004)
+#define QCE_MAX_REG_READ 8
struct qce_result_dump {
u32 auth_iv[QCE_AUTHIV_REGS_CNT];
@@ -27,13 +32,30 @@ struct qce_result_dump {
#define QCE_RESULT_BUF_SZ \
ALIGN(sizeof(struct qce_result_dump), QCE_BAM_BURST_SIZE)
+struct qce_bam_transaction {
+ struct bam_cmd_element qce_bam_ce[QCE_BAM_CMD_ELEMENT_SIZE];
+ struct scatterlist qce_reg_write_sgl[QCE_BAM_CMD_SGL_SIZE];
+ struct scatterlist qce_reg_read_sgl[QCE_BAM_CMD_SGL_SIZE];
+ struct qce_desc_info *qce_desc;
+ u32 qce_bam_ce_index;
+ u32 qce_pre_bam_ce_index;
+ u32 qce_write_sgl_cnt;
+ u32 qce_read_sgl_cnt;
+};
+
struct qce_dma_data {
struct dma_chan *txchan;
struct dma_chan *rxchan;
struct qce_result_dump *result_buf;
+ struct qce_bam_transaction *qce_bam_txn;
void *ignore_buf;
};
+struct qce_desc_info {
+ struct dma_async_tx_descriptor *dma_desc;
+ enum dma_data_direction dir;
+};
+
int qce_dma_request(struct device *dev, struct qce_dma_data *dma);
void qce_dma_release(struct qce_dma_data *dma);
int qce_dma_prep_sgs(struct qce_dma_data *dma, struct scatterlist *sg_in,
@@ -44,5 +66,5 @@ int qce_dma_terminate_all(struct qce_dma_data *dma);
struct scatterlist *
qce_sgtable_add(struct sg_table *sgt, struct scatterlist *sg_add,
unsigned int max_len);
-
+void qce_dma_issue_cmd_desc_pending(struct qce_dma_data *dma, bool read);
#endif /* _DMA_H_ */
--
2.34.1
^ permalink raw reply related	[flat|nested] 17+ messages in thread
* Re: [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
2023-12-14 11:42 ` [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w Md Sadre Alam
@ 2023-12-15 0:11 ` kernel test robot
2023-12-15 0:23 ` kernel test robot
2024-02-22 11:06 ` Sricharan Ramabadhran
2 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2023-12-15 0:11 UTC (permalink / raw)
To: Md Sadre Alam, thara.gopinath, herbert, davem, agross, andersson,
konrad.dybcio, vkoul, linux-crypto, linux-arm-msm, linux-kernel,
dmaengine, quic_srichara, quic_varada
Cc: llvm, oe-kbuild-all, quic_mdalam
Hi Md,
kernel test robot noticed the following build errors:
[auto build test ERROR on herbert-cryptodev-2.6/master]
[also build test ERROR on vkoul-dmaengine/next linus/master v6.7-rc5 next-20231214]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Md-Sadre-Alam/crypto-qce-Add-support-for-crypto-address-read/20231214-194404
base: https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
patch link: https://lore.kernel.org/r/20231214114239.2635325-3-quic_mdalam%40quicinc.com
patch subject: [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
config: arm-randconfig-004-20231215 (https://download.01.org/0day-ci/archive/20231215/202312150743.EugqdZaA-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231215/202312150743.EugqdZaA-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312150743.EugqdZaA-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from drivers/crypto/qce/dma.c:11:
>> drivers/crypto/qce/core.h:32:24: error: field has incomplete type 'struct tasklet_struct'
struct tasklet_struct done_tasklet;
^
drivers/crypto/qce/core.h:32:9: note: forward declaration of 'struct tasklet_struct'
struct tasklet_struct done_tasklet;
^
drivers/crypto/qce/dma.c:44:17: warning: implicit conversion from enumeration type 'enum dma_transfer_direction' to different enumeration type 'enum dma_data_direction' [-Wenum-conversion]
qce_sgl_cnt, dir)) {
~~~~~~~~~~~~~^~~~
include/linux/dma-mapping.h:419:58: note: expanded from macro 'dma_map_sg'
#define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
~~~~~~~~~~~~~~~~ ^
drivers/crypto/qce/dma.c:53:52: warning: implicit conversion from enumeration type 'enum dma_transfer_direction' to different enumeration type 'enum dma_data_direction' [-Wenum-conversion]
dma_unmap_sg(qce->dev, qce_bam_sgl, qce_sgl_cnt, dir);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
include/linux/dma-mapping.h:420:62: note: expanded from macro 'dma_unmap_sg'
#define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, 0)
~~~~~~~~~~~~~~~~~~ ^
2 warnings and 1 error generated.
vim +32 drivers/crypto/qce/core.h
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 10
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 11 /**
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 12 * struct qce_device - crypto engine device structure
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 13 * @queue: crypto request queue
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 14 * @lock: the lock protects queue and req
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 15 * @done_tasklet: done tasklet object
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 16 * @req: current active request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 17 * @result: result of current transform
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 18 * @base: virtual IO base
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 19 * @dev: pointer to device structure
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 20 * @core: core device clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 21 * @iface: interface clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 22 * @bus: bus clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 23 * @dma: pointer to dma data
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 24 * @burst_size: the crypto burst size
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 25 * @pipe_pair_id: which pipe pair id the device using
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 26 * @async_req_enqueue: invoked by every algorithm to enqueue a request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 27 * @async_req_done: invoked by every algorithm to finish its request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 28 */
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 29 struct qce_device {
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 30 struct crypto_queue queue;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 31 spinlock_t lock;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 @32 struct tasklet_struct done_tasklet;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 33 struct crypto_async_request *req;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 34 int result;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 35 void __iomem *base;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 36 struct device *dev;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 37 struct clk *core, *iface, *bus;
694ff00c9bb387 Thara Gopinath 2023-02-22 38 struct icc_path *mem_path;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 39 struct qce_dma_data dma;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 40 int burst_size;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 41 unsigned int pipe_pair_id;
f666e78afa2c49 Md Sadre Alam 2023-12-14 42 dma_addr_t base_dma;
74826d774de8a8 Md Sadre Alam 2023-12-14 43 __le32 *reg_read_buf;
74826d774de8a8 Md Sadre Alam 2023-12-14 44 dma_addr_t reg_buf_phys;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 45 int (*async_req_enqueue)(struct qce_device *qce,
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 46 struct crypto_async_request *req);
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 47 void (*async_req_done)(struct qce_device *qce, int ret);
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 48 };
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 49
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
2023-12-14 11:42 ` [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w Md Sadre Alam
2023-12-15 0:11 ` kernel test robot
@ 2023-12-15 0:23 ` kernel test robot
2024-02-22 11:06 ` Sricharan Ramabadhran
2 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2023-12-15 0:23 UTC (permalink / raw)
To: Md Sadre Alam, thara.gopinath, herbert, davem, agross, andersson,
konrad.dybcio, vkoul, linux-crypto, linux-arm-msm, linux-kernel,
dmaengine, quic_srichara, quic_varada
Cc: oe-kbuild-all, quic_mdalam
Hi Md,
kernel test robot noticed the following build errors:
[auto build test ERROR on herbert-cryptodev-2.6/master]
[also build test ERROR on vkoul-dmaengine/next linus/master v6.7-rc5 next-20231214]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Md-Sadre-Alam/crypto-qce-Add-support-for-crypto-address-read/20231214-194404
base: https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
patch link: https://lore.kernel.org/r/20231214114239.2635325-3-quic_mdalam%40quicinc.com
patch subject: [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
config: m68k-allmodconfig (https://download.01.org/0day-ci/archive/20231215/202312150856.hFSqQCnr-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231215/202312150856.hFSqQCnr-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312150856.hFSqQCnr-lkp@intel.com/
All error/warnings (new ones prefixed by >>):
In file included from drivers/crypto/qce/dma.c:11:
>> drivers/crypto/qce/core.h:32:31: error: field 'done_tasklet' has incomplete type
32 | struct tasklet_struct done_tasklet;
| ^~~~~~~~~~~~
In file included from drivers/crypto/qce/dma.c:7:
drivers/crypto/qce/dma.c: In function 'qce_dma_prep_cmd_sg':
>> drivers/crypto/qce/dma.c:44:38: warning: implicit conversion from 'enum dma_transfer_direction' to 'enum dma_data_direction' [-Wenum-conversion]
44 | qce_sgl_cnt, dir)) {
| ^~~
include/linux/dma-mapping.h:419:58: note: in definition of macro 'dma_map_sg'
419 | #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, 0)
| ^
drivers/crypto/qce/dma.c:53:66: warning: implicit conversion from 'enum dma_transfer_direction' to 'enum dma_data_direction' [-Wenum-conversion]
53 | dma_unmap_sg(qce->dev, qce_bam_sgl, qce_sgl_cnt, dir);
| ^~~
include/linux/dma-mapping.h:420:62: note: in definition of macro 'dma_unmap_sg'
420 | #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, 0)
| ^
vim +/done_tasklet +32 drivers/crypto/qce/core.h
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 10
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 11 /**
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 12 * struct qce_device - crypto engine device structure
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 13 * @queue: crypto request queue
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 14 * @lock: the lock protects queue and req
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 15 * @done_tasklet: done tasklet object
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 16 * @req: current active request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 17 * @result: result of current transform
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 18 * @base: virtual IO base
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 19 * @dev: pointer to device structure
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 20 * @core: core device clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 21 * @iface: interface clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 22 * @bus: bus clock
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 23 * @dma: pointer to dma data
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 24 * @burst_size: the crypto burst size
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 25 * @pipe_pair_id: which pipe pair id the device using
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 26 * @async_req_enqueue: invoked by every algorithm to enqueue a request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 27 * @async_req_done: invoked by every algorithm to finish its request
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 28 */
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 29 struct qce_device {
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 30 struct crypto_queue queue;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 31 spinlock_t lock;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 @32 struct tasklet_struct done_tasklet;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 33 struct crypto_async_request *req;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 34 int result;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 35 void __iomem *base;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 36 struct device *dev;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 37 struct clk *core, *iface, *bus;
694ff00c9bb387 Thara Gopinath 2023-02-22 38 struct icc_path *mem_path;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 39 struct qce_dma_data dma;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 40 int burst_size;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 41 unsigned int pipe_pair_id;
f666e78afa2c49 Md Sadre Alam 2023-12-14 42 dma_addr_t base_dma;
74826d774de8a8 Md Sadre Alam 2023-12-14 43 __le32 *reg_read_buf;
74826d774de8a8 Md Sadre Alam 2023-12-14 44 dma_addr_t reg_buf_phys;
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 45 int (*async_req_enqueue)(struct qce_device *qce,
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 46 struct crypto_async_request *req);
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 47 void (*async_req_done)(struct qce_device *qce, int ret);
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 48 };
ec8f5d8f6f76b9 Stanimir Varbanov 2014-06-25 49
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w
2023-12-14 11:42 ` [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w Md Sadre Alam
2023-12-15 0:11 ` kernel test robot
2023-12-15 0:23 ` kernel test robot
@ 2024-02-22 11:06 ` Sricharan Ramabadhran
2 siblings, 0 replies; 17+ messages in thread
From: Sricharan Ramabadhran @ 2024-02-22 11:06 UTC (permalink / raw)
To: Md Sadre Alam, thara.gopinath, herbert, davem, agross, andersson,
konrad.dybcio, vkoul, linux-crypto, linux-arm-msm, linux-kernel,
dmaengine, quic_varada
On 12/14/2023 5:12 PM, Md Sadre Alam wrote:
> Add BAM/DMA support for crypto register read/write.
> With this change, multiple crypto registers will get
> written using BAM in one go.
>
> Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
> ---
> drivers/crypto/qce/core.h | 9 ++
> drivers/crypto/qce/dma.c | 233 ++++++++++++++++++++++++++++++++++++++
> drivers/crypto/qce/dma.h | 24 +++-
> 3 files changed, 265 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/crypto/qce/core.h b/drivers/crypto/qce/core.h
> index 25e2af45c047..bf28dedd1509 100644
> --- a/drivers/crypto/qce/core.h
> +++ b/drivers/crypto/qce/core.h
> @@ -40,6 +40,8 @@ struct qce_device {
> int burst_size;
> unsigned int pipe_pair_id;
> dma_addr_t base_dma;
> + __le32 *reg_read_buf;
> + dma_addr_t reg_buf_phys;
> int (*async_req_enqueue)(struct qce_device *qce,
> struct crypto_async_request *req);
> void (*async_req_done)(struct qce_device *qce, int ret);
> @@ -59,4 +61,11 @@ struct qce_algo_ops {
> int (*async_req_handle)(struct crypto_async_request *async_req);
> };
>
> +int qce_write_reg_dma(struct qce_device *qce, unsigned int offset, u32 val,
> + int cnt);
> +int qce_read_reg_dma(struct qce_device *qce, unsigned int offset, void *buff,
> + int cnt);
> +void qce_clear_bam_transaction(struct qce_device *qce);
> +int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags);
> +struct qce_bam_transaction *qce_alloc_bam_txn(struct qce_dma_data *dma);
> #endif /* _CORE_H_ */
> diff --git a/drivers/crypto/qce/dma.c b/drivers/crypto/qce/dma.c
> index 46db5bf366b4..85c8d4107afa 100644
> --- a/drivers/crypto/qce/dma.c
> +++ b/drivers/crypto/qce/dma.c
> @@ -4,12 +4,220 @@
> */
>
> #include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> #include <crypto/scatterwalk.h>
>
> #include "dma.h"
> +#include "core.h"
alphabetical order
> +
> +#define QCE_REG_BUF_DMA_ADDR(qce, vaddr) \
> + ((qce)->reg_buf_phys + \
> + ((uint8_t *)(vaddr) - (uint8_t *)(qce)->reg_read_buf))
> +
> +void qce_clear_bam_transaction(struct qce_device *qce)
> +{
> + struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
> +
> + qce_bam_txn->qce_bam_ce_index = 0;
> + qce_bam_txn->qce_write_sgl_cnt = 0;
> + qce_bam_txn->qce_read_sgl_cnt = 0;
> + qce_bam_txn->qce_bam_ce_index = 0;
> + qce_bam_txn->qce_pre_bam_ce_index = 0;
> +}
> +
memset ?
> +static int qce_dma_prep_cmd_sg(struct qce_device *qce, struct dma_chan *chan,
> + struct scatterlist *qce_bam_sgl,
> + int qce_sgl_cnt, unsigned long flags,
> + enum dma_transfer_direction dir,
> + dma_async_tx_callback cb, void *cb_param)
> +{
Fix the alignment.
> + struct dma_async_tx_descriptor *dma_desc;
> + struct qce_desc_info *desc;
> + dma_cookie_t cookie;
> +
> + desc = qce->dma.qce_bam_txn->qce_desc;
> +
> + if (!qce_bam_sgl || !qce_sgl_cnt)
> + return -EINVAL;
> +
> + if (!dma_map_sg(qce->dev, qce_bam_sgl,
> + qce_sgl_cnt, dir)) {
> + dev_err(qce->dev, "failure in mapping sgl for cmd desc\n");
> + return -ENOMEM;
> + }
> +
> + dma_desc = dmaengine_prep_slave_sg(chan, qce_bam_sgl, qce_sgl_cnt,
> + dir, flags);
> + if (!dma_desc) {
> + pr_err("%s:failure in prep cmd desc\n", __func__);
> + dma_unmap_sg(qce->dev, qce_bam_sgl, qce_sgl_cnt, dir);
> + kfree(desc);
> + return -EINVAL;
> + }
> +
> + desc->dma_desc = dma_desc;
> + desc->dma_desc->callback = cb;
> + desc->dma_desc->callback_param = cb_param;
> +
you are overwriting same qce_desc here ?
> + cookie = dmaengine_submit(desc->dma_desc);
> +
> + return dma_submit_error(cookie);
> +}
> +
> +int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags)
> +{
> + struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
> + struct dma_chan *chan = qce->dma.rxchan;
> + unsigned long desc_flags;
> + int ret = 0;
> +
> + desc_flags = DMA_PREP_CMD;
> +
> + /* For command descriptor always use consumer pipe
> + * it recomended as per HPG
> + */
> +
> + if (qce_bam_txn->qce_read_sgl_cnt) {
> + ret = qce_dma_prep_cmd_sg(qce, chan,
> + qce_bam_txn->qce_reg_read_sgl,
> + qce_bam_txn->qce_read_sgl_cnt,
> + desc_flags, DMA_DEV_TO_MEM,
> + NULL, NULL);
alignment.
> + if (ret) {
> + pr_err("error while submiting cmd desc for rx\n");
> + return ret;
> + }
> + }
> +
> + if (qce_bam_txn->qce_write_sgl_cnt) {
> + ret = qce_dma_prep_cmd_sg(qce, chan,
Here chan is still pointing to rxchan. Is this correct ?
> + qce_bam_txn->qce_reg_write_sgl,
> + qce_bam_txn->qce_write_sgl_cnt,
> + desc_flags, DMA_MEM_TO_DEV,
> + NULL, NULL);
> + }
> +
> + if (ret) {
> + pr_err("error while submiting cmd desc for tx\n");
> + return ret;
> + }
> +
> + qce_dma_issue_pending(&qce->dma);
> +
> + return ret;
> +}
> +
> +static void qce_prep_dma_command_desc(struct qce_device *qce,
> + struct qce_dma_data *dma, bool read, unsigned int addr,
> + void *buff, int size)
> +{
alignment
> + struct qce_bam_transaction *qce_bam_txn = dma->qce_bam_txn;
> + struct bam_cmd_element *qce_bam_ce_buffer;
> + int qce_bam_ce_size, cnt, index;
> +
> + index = qce_bam_txn->qce_bam_ce_index;
> + qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce[index];
> + if (read)
> + bam_prep_ce(qce_bam_ce_buffer, addr, BAM_READ_COMMAND,
> + QCE_REG_BUF_DMA_ADDR(qce,
> + (unsigned int *)buff));
> + else
> + bam_prep_ce_le32(qce_bam_ce_buffer, addr, BAM_WRITE_COMMAND,
> + *((__le32 *)buff));
> +
> + if (read) {
> + cnt = qce_bam_txn->qce_read_sgl_cnt;
> + qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce
> + [qce_bam_txn->qce_pre_bam_ce_index];
> + qce_bam_txn->qce_bam_ce_index += size;
> + qce_bam_ce_size = (qce_bam_txn->qce_bam_ce_index -
> + qce_bam_txn->qce_pre_bam_ce_index) *
> + sizeof(struct bam_cmd_element);
> +
> + sg_set_buf(&qce_bam_txn->qce_reg_read_sgl[cnt],
> + qce_bam_ce_buffer,
> + qce_bam_ce_size);
> +
> + ++qce_bam_txn->qce_read_sgl_cnt;
> + qce_bam_txn->qce_pre_bam_ce_index =
> + qce_bam_txn->qce_bam_ce_index;
> + } else {
> + cnt = qce_bam_txn->qce_write_sgl_cnt;
> + qce_bam_ce_buffer = &qce_bam_txn->qce_bam_ce
> + [qce_bam_txn->qce_pre_bam_ce_index];
> + qce_bam_txn->qce_bam_ce_index += size;
> + qce_bam_ce_size = (qce_bam_txn->qce_bam_ce_index -
> + qce_bam_txn->qce_pre_bam_ce_index) *
> + sizeof(struct bam_cmd_element);
> +
> + sg_set_buf(&qce_bam_txn->qce_reg_write_sgl[cnt],
> + qce_bam_ce_buffer,
> + qce_bam_ce_size);
> +
> + ++qce_bam_txn->qce_write_sgl_cnt;
> + qce_bam_txn->qce_pre_bam_ce_index =
> + qce_bam_txn->qce_bam_ce_index;
> + }
> +}
The hunk above can be improved:
*) Between the read and write paths only the array name differs; the
rest can be made common.
*) Some standard circular-buffer APIs could be used, so that index
wrapping is taken care of.
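The factoring suggested above might look like the sketch below. The types are simplified stand-ins for struct scatterlist and struct bam_cmd_element so it compiles outside the kernel, and the helper name txn_add_cmd_sgl is hypothetical:

```c
#include <assert.h>
#include <stddef.h>

#define CE_SZ 16 /* stand-in for sizeof(struct bam_cmd_element) */

struct fake_sg { void *buf; size_t len; };

struct txn {
	int ce_index, pre_ce_index;
	int read_cnt, write_cnt;
	char ce[64][CE_SZ];                  /* command-element pool */
	struct fake_sg read_sgl[8], write_sgl[8];
};

/* One body for both directions: the read and write branches of the
 * original hunk differ only in which sgl array and counter they touch. */
static void txn_add_cmd_sgl(struct txn *t, int read, int size)
{
	struct fake_sg *sgl = read ? t->read_sgl : t->write_sgl;
	int *cnt = read ? &t->read_cnt : &t->write_cnt;
	void *buf = t->ce[t->pre_ce_index];

	t->ce_index += size;
	sgl[*cnt].buf = buf;
	sgl[*cnt].len = (size_t)(t->ce_index - t->pre_ce_index) * CE_SZ;
	(*cnt)++;
	t->pre_ce_index = t->ce_index;
}
```

Note this sketch still does no wrap handling on ce_index; a real circular-buffer version would mask the index against the pool size.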
> +
> +int qce_write_reg_dma(struct qce_device *qce,
> + unsigned int offset, u32 val, int cnt)
> +{
> + void *buff;
> + unsigned int reg_addr;
> +
> + buff = &val;
> +
> + reg_addr = ((unsigned int)(qce->base_dma) + offset);
Is this type-cast really required?
Could the entire function be folded into one line?
> + qce_prep_dma_command_desc(qce, &qce->dma, false, reg_addr, buff, cnt);
> +
> + return 0;
> +}
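A sketch of the fold asked about above, with stub types in place of the real driver structures (qce_prep_dma_command_desc here is a recording stub, not the driver's implementation, and whether the truncating cast is needed at all is exactly the reviewer's question):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t u32;

/* Minimal stand-ins; in the driver these are the real qce structures. */
struct qce_dma_data { int unused; };
struct qce_device { uintptr_t base_dma; struct qce_dma_data dma; };

static unsigned int captured_addr; /* records the address the stub saw */

static void qce_prep_dma_command_desc(struct qce_device *qce,
				      struct qce_dma_data *dma, bool read,
				      unsigned int addr, void *buff, int size)
{
	(void)qce; (void)dma; (void)read; (void)buff; (void)size;
	captured_addr = addr;
}

/* Folded body: the local variables collapse into a single call. */
static int qce_write_reg_dma(struct qce_device *qce, unsigned int offset,
			     u32 val, int cnt)
{
	qce_prep_dma_command_desc(qce, &qce->dma, false,
				  (unsigned int)(qce->base_dma + offset),
				  &val, cnt);
	return 0;
}
```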
> +
> +int qce_read_reg_dma(struct qce_device *qce,
> + unsigned int offset, void *buff, int cnt)
> +{
> + void *vaddr;
> + unsigned int reg_addr;
> +
> + reg_addr = ((unsigned int)(qce->base_dma) + offset);
same comment as above.
> + vaddr = qce->reg_read_buf;
> +
> + qce_prep_dma_command_desc(qce, &qce->dma, true, reg_addr, vaddr, cnt);
> + memcpy(buff, vaddr, 4);
> +
> + return 0;
> +}
> +
> +struct qce_bam_transaction *qce_alloc_bam_txn(struct qce_dma_data *dma)
> +{
> + struct qce_bam_transaction *qce_bam_txn;
> +
> + dma->qce_bam_txn = kmalloc(sizeof(*qce_bam_txn), GFP_KERNEL);
> + if (!dma->qce_bam_txn)
> + return NULL;
> +
> + dma->qce_bam_txn->qce_desc = kzalloc(sizeof(struct qce_desc_info),
> + GFP_KERNEL);
Only one qce_desc instance is allocated here — is that intentional?
> + if (!dma->qce_bam_txn->qce_desc) {
> + kfree(dma->qce_bam_txn);
> + return NULL;
> + }
> +
> + sg_init_table(dma->qce_bam_txn->qce_reg_write_sgl,
> + QCE_BAM_CMD_SGL_SIZE);
> +
> + sg_init_table(dma->qce_bam_txn->qce_reg_read_sgl,
> + QCE_BAM_CMD_SGL_SIZE);
> +
> + qce_bam_txn = dma->qce_bam_txn;
> +
> + return qce_bam_txn;
Why not simply return dma->qce_bam_txn?
> +}
>
> int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
> {
> + struct qce_device *qce = container_of(dma, struct qce_device, dma);
> int ret;
>
> dma->txchan = dma_request_chan(dev, "tx");
> @@ -31,6 +239,21 @@ int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
>
> dma->ignore_buf = dma->result_buf + QCE_RESULT_BUF_SZ;
>
> + dma->qce_bam_txn = qce_alloc_bam_txn(dma);
> + if (!dma->qce_bam_txn) {
> + pr_err("Failed to allocate bam transaction\n");
> + return -ENOMEM;
> + }
> +
> + qce->reg_read_buf = dmam_alloc_coherent(qce->dev,
> + QCE_MAX_REG_READ *
> + sizeof(*qce->reg_read_buf),
> + &qce->reg_buf_phys, GFP_KERNEL);
alignment
> + if (!qce->reg_read_buf) {
> + pr_err("Failed to allocate reg_read_buf\n");
> + return -ENOMEM;
> + }
> +
> return 0;
> error_nomem:
> dma_release_channel(dma->rxchan);
> @@ -41,9 +264,19 @@ int qce_dma_request(struct device *dev, struct qce_dma_data *dma)
>
> void qce_dma_release(struct qce_dma_data *dma)
> {
> + struct qce_device *qce = container_of(dma,
> + struct qce_device, dma);
> +
> dma_release_channel(dma->txchan);
> dma_release_channel(dma->rxchan);
> kfree(dma->result_buf);
> + if (qce->reg_read_buf)
is this check required ?
> + dmam_free_coherent(qce->dev, QCE_MAX_REG_READ *
> + sizeof(*qce->reg_read_buf),
> + qce->reg_read_buf,
> + qce->reg_buf_phys);
> + kfree(dma->qce_bam_txn->qce_desc);
> + kfree(dma->qce_bam_txn);
> }
>
> struct scatterlist *
> diff --git a/drivers/crypto/qce/dma.h b/drivers/crypto/qce/dma.h
> index 786402169360..f10991590b3f 100644
> --- a/drivers/crypto/qce/dma.h
> +++ b/drivers/crypto/qce/dma.h
> @@ -7,6 +7,7 @@
> #define _DMA_H_
>
> #include <linux/dmaengine.h>
> +#include <linux/dma/qcom_bam_dma.h>
>
> /* maximum data transfer block size between BAM and CE */
> #define QCE_BAM_BURST_SIZE 64
> @@ -14,6 +15,10 @@
> #define QCE_AUTHIV_REGS_CNT 16
> #define QCE_AUTH_BYTECOUNT_REGS_CNT 4
> #define QCE_CNTRIV_REGS_CNT 4
> +#define QCE_BAM_CMD_SGL_SIZE 64
> +#define QCE_BAM_CMD_ELEMENT_SIZE 64
> +#define QCE_DMA_DESC_FLAG_BAM_NWD (0x0004)
> +#define QCE_MAX_REG_READ 8
>
> struct qce_result_dump {
> u32 auth_iv[QCE_AUTHIV_REGS_CNT];
> @@ -27,13 +32,30 @@ struct qce_result_dump {
> #define QCE_RESULT_BUF_SZ \
> ALIGN(sizeof(struct qce_result_dump), QCE_BAM_BURST_SIZE)
>
> +struct qce_bam_transaction {
> + struct bam_cmd_element qce_bam_ce[QCE_BAM_CMD_ELEMENT_SIZE];
Any reason why this is not dmam_alloc_coherent ?
Regards,
Sricharan
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 03/11] crypto: qce - Convert register r/w for skcipher via BAM/DMA
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
2023-12-14 11:42 ` [PATCH 01/11] crypto: qce - Add support for crypto address read Md Sadre Alam
2023-12-14 11:42 ` [PATCH 02/11] crypto: qce - Add bam dma support for crypto register r/w Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2024-02-22 11:26 ` Sricharan Ramabadhran
2023-12-14 11:42 ` [PATCH 04/11] crypto: qce - Convert register r/w for sha " Md Sadre Alam
` (7 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Convert register read/write for skcipher to go via BAM/DMA.
With this change, all the crypto register configuration
will be done via BAM/DMA. This change prepares a command
descriptor for all registers and writes it in one shot.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/common.c | 42 +++++++++++++++++++++--------------
drivers/crypto/qce/skcipher.c | 12 ++++++++++
2 files changed, 37 insertions(+), 17 deletions(-)
diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 04253a8d3340..d1da6b1938f3 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -34,7 +34,7 @@ static inline void qce_write_array(struct qce_device *qce, u32 offset,
int i;
for (i = 0; i < len; i++)
- qce_write(qce, offset + i * sizeof(u32), val[i]);
+ qce_write_reg_dma(qce, offset + i * sizeof(u32), val[i], 1);
}
static inline void
@@ -43,7 +43,7 @@ qce_clear_array(struct qce_device *qce, u32 offset, unsigned int len)
int i;
for (i = 0; i < len; i++)
- qce_write(qce, offset + i * sizeof(u32), 0);
+ qce_write_reg_dma(qce, offset + i * sizeof(u32), 0, 1);
}
static u32 qce_config_reg(struct qce_device *qce, int little)
@@ -86,16 +86,16 @@ static void qce_setup_config(struct qce_device *qce)
config = qce_config_reg(qce, 0);
/* clear status */
- qce_write(qce, REG_STATUS, 0);
- qce_write(qce, REG_CONFIG, config);
+ qce_write_reg_dma(qce, REG_STATUS, 0, 1);
+ qce_write_reg_dma(qce, REG_CONFIG, config, 1);
}
static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
{
if (result_dump)
- qce_write(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT));
+ qce_write_reg_dma(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT), 1);
else
- qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
+ qce_write_reg_dma(qce, REG_GOPROC, BIT(GO_SHIFT), 1);
}
#if defined(CONFIG_CRYPTO_DEV_QCE_SHA) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
@@ -308,7 +308,7 @@ static void qce_xtskey(struct qce_device *qce, const u8 *enckey,
/* Set data unit size to cryptlen. Anything else causes
* crypto engine to return back incorrect results.
*/
- qce_write(qce, REG_ENCR_XTS_DU_SIZE, cryptlen);
+ qce_write_reg_dma(qce, REG_ENCR_XTS_DU_SIZE, cryptlen, 1);
}
static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
@@ -325,7 +325,9 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
u32 encr_cfg = 0, auth_cfg = 0, config;
unsigned int ivsize = rctx->ivsize;
unsigned long flags = rctx->flags;
+ int ret;
+ qce_clear_bam_transaction(qce);
qce_setup_config(qce);
if (IS_XTS(flags))
@@ -336,7 +338,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
qce_cpu_to_be32p_array(enckey, ctx->enc_key, keylen);
enckey_words = keylen / sizeof(u32);
- qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_CFG, auth_cfg, 1);
encr_cfg = qce_encr_cfg(flags, keylen);
@@ -369,25 +371,31 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
if (IS_ENCRYPT(flags))
encr_cfg |= BIT(ENCODE_SHIFT);
- qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
- qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
- qce_write(qce, REG_ENCR_SEG_START, 0);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_CFG, encr_cfg, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_START, 0, 1);
if (IS_CTR(flags)) {
- qce_write(qce, REG_CNTR_MASK, ~0);
- qce_write(qce, REG_CNTR_MASK0, ~0);
- qce_write(qce, REG_CNTR_MASK1, ~0);
- qce_write(qce, REG_CNTR_MASK2, ~0);
+ qce_write_reg_dma(qce, REG_CNTR_MASK, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK0, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK1, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK2, ~0, 1);
}
- qce_write(qce, REG_SEG_SIZE, rctx->cryptlen);
+ qce_write_reg_dma(qce, REG_SEG_SIZE, rctx->cryptlen, 1);
/* get little endianness */
config = qce_config_reg(qce, 1);
- qce_write(qce, REG_CONFIG, config);
+ qce_write_reg_dma(qce, REG_CONFIG, config, 1);
qce_crypto_go(qce, true);
+ ret = qce_submit_cmd_desc(qce, 0);
+ if (ret) {
+ dev_err(qce->dev, "Error in skcipher cmd descriptor\n");
+ return ret;
+ }
+
return 0;
}
#endif
diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
index 5b493fdc1e74..fa7ee5db9aa0 100644
--- a/drivers/crypto/qce/skcipher.c
+++ b/drivers/crypto/qce/skcipher.c
@@ -31,6 +31,7 @@ static void qce_skcipher_done(void *data)
struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req);
struct qce_alg_template *tmpl = to_cipher_tmpl(crypto_skcipher_reqtfm(req));
struct qce_device *qce = tmpl->qce;
+ struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
struct qce_result_dump *result_buf = qce->dma.result_buf;
enum dma_data_direction dir_src, dir_dst;
u32 status;
@@ -52,6 +53,17 @@ static void qce_skcipher_done(void *data)
sg_free_table(&rctx->dst_tbl);
+ if (qce_bam_txn->qce_read_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_read_sgl,
+ qce_bam_txn->qce_read_sgl_cnt,
+ DMA_DEV_TO_MEM);
+ if (qce_bam_txn->qce_write_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_write_sgl,
+ qce_bam_txn->qce_write_sgl_cnt,
+ DMA_MEM_TO_DEV);
+
error = qce_check_status(qce, &status);
if (error < 0)
dev_dbg(qce->dev, "skcipher operation error (%x)\n", status);
--
2.34.1
* Re: [PATCH 03/11] crypto: qce - Convert register r/w for skcipher via BAM/DMA
2023-12-14 11:42 ` [PATCH 03/11] crypto: qce - Convert register r/w for skcipher via BAM/DMA Md Sadre Alam
@ 2024-02-22 11:26 ` Sricharan Ramabadhran
0 siblings, 0 replies; 17+ messages in thread
From: Sricharan Ramabadhran @ 2024-02-22 11:26 UTC (permalink / raw)
To: Md Sadre Alam, thara.gopinath, herbert, davem, agross, andersson,
konrad.dybcio, vkoul, linux-crypto, linux-arm-msm, linux-kernel,
dmaengine, quic_varada
On 12/14/2023 5:12 PM, Md Sadre Alam wrote:
> Convert register read/write for skcipher via BAM/DMA.
> with this change all the crypto register configuration
> will be done via BAM/DMA. This change will prepare command
> descriptor for all register and write it once.
>
> Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
> ---
> drivers/crypto/qce/common.c | 42 +++++++++++++++++++++--------------
> drivers/crypto/qce/skcipher.c | 12 ++++++++++
> 2 files changed, 37 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
> index 04253a8d3340..d1da6b1938f3 100644
> --- a/drivers/crypto/qce/common.c
> +++ b/drivers/crypto/qce/common.c
Changes to common.c should have been in patch #2?
Btw, if we are making the cmd descriptor approach the default for all
SoCs, shouldn't it be tested on all platforms?
Regards,
Sricharan
* [PATCH 04/11] crypto: qce - Convert register r/w for sha via BAM/DMA
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
` (2 preceding siblings ...)
2023-12-14 11:42 ` [PATCH 03/11] crypto: qce - Convert register r/w for skcipher via BAM/DMA Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-14 11:42 ` [PATCH 05/11] crypto: qce - Convert register r/w for aead " Md Sadre Alam
` (6 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Convert register read/write for sha to go via BAM/DMA.
With this change, all the crypto register configuration
will be done via BAM/DMA. This change prepares a command
descriptor for all registers and writes it in one shot.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/common.c | 26 +++++++++++++++++---------
drivers/crypto/qce/sha.c | 12 ++++++++++++
2 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index d1da6b1938f3..d485762a3fdc 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -157,17 +157,19 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
__be32 mackey[QCE_SHA_HMAC_KEY_SIZE / sizeof(__be32)] = {0};
u32 auth_cfg = 0, config;
unsigned int iv_words;
+ int ret;
/* if not the last, the size has to be on the block boundary */
if (!rctx->last_blk && req->nbytes % blocksize)
return -EINVAL;
+ qce_clear_bam_transaction(qce);
qce_setup_config(qce);
if (IS_CMAC(rctx->flags)) {
- qce_write(qce, REG_AUTH_SEG_CFG, 0);
- qce_write(qce, REG_ENCR_SEG_CFG, 0);
- qce_write(qce, REG_ENCR_SEG_SIZE, 0);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_CFG, 0, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_CFG, 0, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_SIZE, 0, 1);
qce_clear_array(qce, REG_AUTH_IV0, 16);
qce_clear_array(qce, REG_AUTH_KEY0, 16);
qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
@@ -213,18 +215,24 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
auth_cfg &= ~BIT(AUTH_FIRST_SHIFT);
go_proc:
- qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
- qce_write(qce, REG_AUTH_SEG_SIZE, req->nbytes);
- qce_write(qce, REG_AUTH_SEG_START, 0);
- qce_write(qce, REG_ENCR_SEG_CFG, 0);
- qce_write(qce, REG_SEG_SIZE, req->nbytes);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_CFG, auth_cfg, 1);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_SIZE, req->nbytes, 1);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_START, 0, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_CFG, 0, 1);
+ qce_write_reg_dma(qce, REG_SEG_SIZE, req->nbytes, 1);
/* get little endianness */
config = qce_config_reg(qce, 1);
- qce_write(qce, REG_CONFIG, config);
+ qce_write_reg_dma(qce, REG_CONFIG, config, 1);
qce_crypto_go(qce, true);
+ ret = qce_submit_cmd_desc(qce, 0);
+ if (ret) {
+ dev_err(qce->dev, "Error in sha cmd descriptor\n");
+ return ret;
+ }
+
return 0;
}
#endif
diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index fc72af8aa9a7..f850c6206a31 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -41,6 +41,7 @@ static void qce_ahash_done(void *data)
struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
struct qce_alg_template *tmpl = to_ahash_tmpl(async_req->tfm);
struct qce_device *qce = tmpl->qce;
+ struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
struct qce_result_dump *result = qce->dma.result_buf;
unsigned int digestsize = crypto_ahash_digestsize(ahash);
int error;
@@ -60,6 +61,17 @@ static void qce_ahash_done(void *data)
rctx->byte_count[0] = cpu_to_be32(result->auth_byte_count[0]);
rctx->byte_count[1] = cpu_to_be32(result->auth_byte_count[1]);
+ if (qce_bam_txn->qce_read_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_read_sgl,
+ qce_bam_txn->qce_read_sgl_cnt,
+ DMA_DEV_TO_MEM);
+ if (qce_bam_txn->qce_write_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_write_sgl,
+ qce_bam_txn->qce_write_sgl_cnt,
+ DMA_MEM_TO_DEV);
+
error = qce_check_status(qce, &status);
if (error < 0)
dev_dbg(qce->dev, "ahash operation error (%x)\n", status);
--
2.34.1
* [PATCH 05/11] crypto: qce - Convert register r/w for aead via BAM/DMA
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
` (3 preceding siblings ...)
2023-12-14 11:42 ` [PATCH 04/11] crypto: qce - Convert register r/w for sha " Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-14 11:42 ` [PATCH 06/11] drivers: bam_dma: Add LOCK & UNLOCK flag support Md Sadre Alam
` (5 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Convert register read/write for aead to go via BAM/DMA.
With this change, all the crypto register configuration
will be done via BAM/DMA. This change prepares a command
descriptor for all registers and writes it in one shot.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/aead.c | 12 ++++++++++++
drivers/crypto/qce/common.c | 38 ++++++++++++++++++++++---------------
2 files changed, 35 insertions(+), 15 deletions(-)
diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index 7d811728f047..c03600f396be 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -29,6 +29,7 @@ static void qce_aead_done(void *data)
struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
struct qce_device *qce = tmpl->qce;
struct qce_result_dump *result_buf = qce->dma.result_buf;
+ struct qce_bam_transaction *qce_bam_txn = qce->dma.qce_bam_txn;
enum dma_data_direction dir_src, dir_dst;
bool diff_dst;
int error;
@@ -50,6 +51,17 @@ static void qce_aead_done(void *data)
dma_unmap_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+ if (qce_bam_txn->qce_read_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_read_sgl,
+ qce_bam_txn->qce_read_sgl_cnt,
+ DMA_DEV_TO_MEM);
+ if (qce_bam_txn->qce_write_sgl_cnt)
+ dma_unmap_sg(qce->dev,
+ qce_bam_txn->qce_reg_write_sgl,
+ qce_bam_txn->qce_write_sgl_cnt,
+ DMA_MEM_TO_DEV);
+
if (IS_CCM(rctx->flags)) {
if (req->assoclen) {
sg_free_table(&rctx->src_tbl);
diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index d485762a3fdc..ff96f6ba1fc5 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -454,7 +454,9 @@ static int qce_setup_regs_aead(struct crypto_async_request *async_req)
unsigned long flags = rctx->flags;
u32 encr_cfg, auth_cfg, config, totallen;
u32 iv_last_word;
+ int ret;
+ qce_clear_bam_transaction(qce);
qce_setup_config(qce);
/* Write encryption key */
@@ -467,12 +469,12 @@ static int qce_setup_regs_aead(struct crypto_async_request *async_req)
if (IS_CCM(rctx->flags)) {
iv_last_word = enciv[enciv_words - 1];
- qce_write(qce, REG_CNTR3_IV3, iv_last_word + 1);
+ qce_write_reg_dma(qce, REG_CNTR3_IV3, iv_last_word + 1, 1);
qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
- qce_write(qce, REG_CNTR_MASK, ~0);
- qce_write(qce, REG_CNTR_MASK0, ~0);
- qce_write(qce, REG_CNTR_MASK1, ~0);
- qce_write(qce, REG_CNTR_MASK2, ~0);
+ qce_write_reg_dma(qce, REG_CNTR_MASK, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK0, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK1, ~0, 1);
+ qce_write_reg_dma(qce, REG_CNTR_MASK2, ~0, 1);
}
/* Clear authentication IV and KEY registers of previous values */
@@ -508,7 +510,7 @@ static int qce_setup_regs_aead(struct crypto_async_request *async_req)
encr_cfg = qce_encr_cfg(flags, enc_keylen);
if (IS_ENCRYPT(flags))
encr_cfg |= BIT(ENCODE_SHIFT);
- qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_CFG, encr_cfg, 1);
/* Set up AUTH_SEG_CFG */
auth_cfg = qce_auth_cfg(rctx->flags, auth_keylen, ctx->authsize);
@@ -525,34 +527,40 @@ static int qce_setup_regs_aead(struct crypto_async_request *async_req)
else
auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
}
- qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_CFG, auth_cfg, 1);
totallen = rctx->cryptlen + rctx->assoclen;
/* Set the encryption size and start offset */
if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
- qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize, 1);
else
- qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
- qce_write(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen, 1);
+ qce_write_reg_dma(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff, 1);
/* Set the authentication size and start offset */
- qce_write(qce, REG_AUTH_SEG_SIZE, totallen);
- qce_write(qce, REG_AUTH_SEG_START, 0);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_SIZE, totallen, 1);
+ qce_write_reg_dma(qce, REG_AUTH_SEG_START, 0, 1);
/* Write total length */
if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
- qce_write(qce, REG_SEG_SIZE, totallen + ctx->authsize);
+ qce_write_reg_dma(qce, REG_SEG_SIZE, totallen + ctx->authsize, 1);
else
- qce_write(qce, REG_SEG_SIZE, totallen);
+ qce_write_reg_dma(qce, REG_SEG_SIZE, totallen, 1);
/* get little endianness */
config = qce_config_reg(qce, 1);
- qce_write(qce, REG_CONFIG, config);
+ qce_write_reg_dma(qce, REG_CONFIG, config, 1);
/* Start the process */
qce_crypto_go(qce, !IS_CCM(flags));
+ ret = qce_submit_cmd_desc(qce, 0);
+ if (ret) {
+ dev_err(qce->dev, "Error in aead cmd descriptor\n");
+ return ret;
+ }
+
return 0;
}
#endif
--
2.34.1
* [PATCH 06/11] drivers: bam_dma: Add LOCK & UNLOCK flag support
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
` (4 preceding siblings ...)
2023-12-14 11:42 ` [PATCH 05/11] crypto: qce - Convert register r/w for aead " Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-14 11:42 ` [PATCH 07/11] crypto: qce - Add LOCK and " Md Sadre Alam
` (4 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add lock and unlock flag support for command descriptors.
Once the lock is set from the requester's pipe, the BAM
controller will lock out all other pipes and process requests
only from the requester's pipe. Unlocking can only be performed
from the same pipe.
If the DMA_PREP_LOCK flag is passed with a command descriptor, the
requester of this transaction wants to lock the BAM controller for
this transaction, so the BAM driver should set the LOCK bit in the
HW descriptor. If the DMA_PREP_UNLOCK flag is passed with a command
descriptor, the requester wants to unlock the BAM controller, so the
BAM driver should set the UNLOCK bit in the HW descriptor.
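The flag-to-descriptor-bit mapping described here can be sketched in isolation. The flag and bit values below match the patch; the helper name bam_desc_lock_bits is hypothetical (in the patch this logic sits inline in bam_prep_slave_sg(), guarded by DMA_PREP_CMD):

```c
#include <assert.h>
#include <stdint.h>

/* Values from the patch: dmaengine-side request flags and the BAM HW
 * descriptor bits they map to. */
#define DMA_PREP_LOCK    (1u << 0)
#define DMA_PREP_UNLOCK  (1u << 1)
#define DESC_FLAG_LOCK   (1u << 10)
#define DESC_FLAG_UNLOCK (1u << 9)

/* Translate the caller's prep flags into HW descriptor flag bits. */
static uint16_t bam_desc_lock_bits(unsigned long prep_flags)
{
	uint16_t hw = 0;

	if (prep_flags & DMA_PREP_LOCK)
		hw |= DESC_FLAG_LOCK;
	if (prep_flags & DMA_PREP_UNLOCK)
		hw |= DESC_FLAG_UNLOCK;
	return hw;
}
```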
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/dma/qcom/bam_dma.c | 10 ++++++++++
include/linux/dma/qcom_bam_dma.h | 2 ++
2 files changed, 12 insertions(+)
diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 5e7d332731e0..146d78af3731 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -41,6 +41,7 @@
#include <linux/clk.h>
#include <linux/dmaengine.h>
#include <linux/pm_runtime.h>
+#include <linux/dma/qcom_bam_dma.h>
#include "../dmaengine.h"
#include "../virt-dma.h"
@@ -58,6 +59,8 @@ struct bam_desc_hw {
#define DESC_FLAG_EOB BIT(13)
#define DESC_FLAG_NWD BIT(12)
#define DESC_FLAG_CMD BIT(11)
+#define DESC_FLAG_LOCK BIT(10)
+#define DESC_FLAG_UNLOCK BIT(9)
struct bam_async_desc {
struct virt_dma_desc vd;
@@ -686,6 +689,13 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
/* fill in temporary descriptors */
desc = async_desc->desc;
+ if (flags & DMA_PREP_CMD) {
+ if (flags & DMA_PREP_LOCK)
+ desc->flags |= cpu_to_le16(DESC_FLAG_LOCK);
+ if (flags & DMA_PREP_UNLOCK)
+ desc->flags |= cpu_to_le16(DESC_FLAG_UNLOCK);
+ }
+
for_each_sg(sgl, sg, sg_len, i) {
unsigned int remainder = sg_dma_len(sg);
unsigned int curr_offset = 0;
diff --git a/include/linux/dma/qcom_bam_dma.h b/include/linux/dma/qcom_bam_dma.h
index 68fc0e643b1b..bc619c44ce82 100644
--- a/include/linux/dma/qcom_bam_dma.h
+++ b/include/linux/dma/qcom_bam_dma.h
@@ -8,6 +8,8 @@
#include <asm/byteorder.h>
+#define DMA_PREP_LOCK BIT(0)
+#define DMA_PREP_UNLOCK BIT(1)
/*
* This data type corresponds to the native Command Element
* supported by BAM DMA Engine.
--
2.34.1
* [PATCH 07/11] crypto: qce - Add LOCK and UNLOCK flag support
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
` (5 preceding siblings ...)
2023-12-14 11:42 ` [PATCH 06/11] drivers: bam_dma: Add LOCK & UNLOCK flag support Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-14 11:42 ` [PATCH 08/11] crypto: qce - Add support for lock aquire,lock release api Md Sadre Alam
` (3 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add LOCK and UNLOCK flag support while preparing
command descriptors for writing crypto registers.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/dma.c | 7 ++++++-
drivers/crypto/qce/dma.h | 2 ++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/qce/dma.c b/drivers/crypto/qce/dma.c
index 85c8d4107afa..bda60e6bc4b3 100644
--- a/drivers/crypto/qce/dma.c
+++ b/drivers/crypto/qce/dma.c
@@ -71,7 +71,12 @@ int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags)
unsigned long desc_flags;
int ret = 0;
- desc_flags = DMA_PREP_CMD;
+ if (flags & QCE_DMA_DESC_FLAG_LOCK)
+ desc_flags = DMA_PREP_CMD | DMA_PREP_LOCK;
+ else if (flags & QCE_DMA_DESC_FLAG_UNLOCK)
+ desc_flags = DMA_PREP_CMD | DMA_PREP_UNLOCK;
+ else
+ desc_flags = DMA_PREP_CMD;
/* For command descriptor always use consumer pipe
* it recomended as per HPG
diff --git a/drivers/crypto/qce/dma.h b/drivers/crypto/qce/dma.h
index f10991590b3f..ad8a18a720b1 100644
--- a/drivers/crypto/qce/dma.h
+++ b/drivers/crypto/qce/dma.h
@@ -19,6 +19,8 @@
#define QCE_BAM_CMD_ELEMENT_SIZE 64
#define QCE_DMA_DESC_FLAG_BAM_NWD (0x0004)
#define QCE_MAX_REG_READ 8
+#define QCE_DMA_DESC_FLAG_LOCK (0x0002)
+#define QCE_DMA_DESC_FLAG_UNLOCK (0x0001)
struct qce_result_dump {
u32 auth_iv[QCE_AUTHIV_REGS_CNT];
--
2.34.1
* [PATCH 08/11] crypto: qce - Add support for lock aquire,lock release api.
2023-12-14 11:42 [PATCH 00/11] Add cmd descriptor support Md Sadre Alam
` (6 preceding siblings ...)
2023-12-14 11:42 ` [PATCH 07/11] crypto: qce - Add LOCK and " Md Sadre Alam
@ 2023-12-14 11:42 ` Md Sadre Alam
2023-12-14 11:42 ` [PATCH 09/11] crypto: qce - Add support for lock/unlock in skcipher Md Sadre Alam
` (2 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add support for lock acquire and lock release APIs.
When multiple EEs (Execution Environments) want to access
CE5, there will be a race condition between them.
Since each EE has its own dedicated BAM pipe, BAM allows
locking and unlocking on a BAM pipe. So if one EE requests
CE5 access, that EE first has to LOCK the BAM pipe by
setting the LOCK bit in a command descriptor, and then
access the engine. After finishing the request, the EE has
to UNLOCK the BAM pipe, so that the race condition cannot
occur.
Add two APIs, qce_bam_acquire_lock() and qce_bam_release_lock(),
for this purpose.
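A hypothetical caller pattern implied by this commit message is shown below. Everything here is a stand-in stub: do_locked_request and noop_submit are illustrative names, not part of the patch, and the real qce_bam_acquire_lock()/qce_bam_release_lock() issue a dummy register read carrying the LOCK/UNLOCK descriptor flag rather than setting a field:

```c
#include <assert.h>

struct qce_device { int locked; }; /* stand-in for the driver struct */

/* Stubs modeling the patch's APIs. */
static int qce_bam_acquire_lock(struct qce_device *qce)
{
	qce->locked = 1;
	return 0;
}

static int qce_bam_release_lock(struct qce_device *qce)
{
	qce->locked = 0;
	return 0;
}

static int noop_submit(struct qce_device *qce) { (void)qce; return 0; }

/* Bracket a CE5 request with the lock APIs so that concurrent EEs
 * serialize on the BAM pipe. */
static int do_locked_request(struct qce_device *qce,
			     int (*submit)(struct qce_device *))
{
	int ret = qce_bam_acquire_lock(qce);

	if (ret)
		return ret;
	ret = submit(qce);
	qce_bam_release_lock(qce);
	return ret;
}
```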
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/common.c | 38 +++++++++++++++++++++++++++++++++++++
drivers/crypto/qce/core.h | 2 ++
2 files changed, 40 insertions(+)
diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index ff96f6ba1fc5..d3b461331b24 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -617,3 +617,41 @@ void qce_get_version(struct qce_device *qce, u32 *major, u32 *minor, u32 *step)
*minor = (val & CORE_MINOR_REV_MASK) >> CORE_MINOR_REV_SHIFT;
*step = (val & CORE_STEP_REV_MASK) >> CORE_STEP_REV_SHIFT;
}
+
+int qce_bam_acquire_lock(struct qce_device *qce)
+{
+ u32 val = 0;
+ int ret;
+
+ qce_clear_bam_transaction(qce);
+
+ /* This is just a dummy read to acquire lock bam pipe */
+ qce_read_reg_dma(qce, REG_STATUS2, &val, 1);
+
+ ret = qce_submit_cmd_desc(qce, QCE_DMA_DESC_FLAG_LOCK);
+ if (ret) {
+ dev_err(qce->dev, "Error in LOCK cmd descriptor\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+int qce_bam_release_lock(struct qce_device *qce)
+{
+ u32 val = 0;
+ int ret;
+
+ qce_clear_bam_transaction(qce);
+
+ /* This just dummy read to release lock on bam pipe*/
+ qce_read_reg_dma(qce, REG_STATUS2, &val, 1);
+
+ ret = qce_submit_cmd_desc(qce, QCE_DMA_DESC_FLAG_UNLOCK);
+ if (ret) {
+ dev_err(qce->dev, "Error in UNLOCK cmd descriptor\n");
+ return ret;
+ }
+
+ return 0;
+}
diff --git a/drivers/crypto/qce/core.h b/drivers/crypto/qce/core.h
index bf28dedd1509..d01d810b60ad 100644
--- a/drivers/crypto/qce/core.h
+++ b/drivers/crypto/qce/core.h
@@ -68,4 +68,6 @@ int qce_read_reg_dma(struct qce_device *qce, unsigned int offset, void *buff,
void qce_clear_bam_transaction(struct qce_device *qce);
int qce_submit_cmd_desc(struct qce_device *qce, unsigned long flags);
struct qce_bam_transaction *qce_alloc_bam_txn(struct qce_dma_data *dma);
+int qce_bam_acquire_lock(struct qce_device *qce);
+int qce_bam_release_lock(struct qce_device *qce);
#endif /* _CORE_H_ */
--
2.34.1
* [PATCH 09/11] crypto: qce - Add support for lock/unlock in skcipher
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add support for lock/unlock of the BAM pipe in skcipher.
When multiple EEs (Execution Environments) try to access
the same crypto engine, an EE has to lock the BAM pipe
before submitting a request to the crypto engine, and
unlock the BAM pipe once the request is done so that
other EEs can access the crypto engine.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/skcipher.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
index fa7ee5db9aa0..c74df30e8e31 100644
--- a/drivers/crypto/qce/skcipher.c
+++ b/drivers/crypto/qce/skcipher.c
@@ -42,6 +42,8 @@ static void qce_skcipher_done(void *data)
dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
+ qce_bam_release_lock(qce);
+
error = qce_dma_terminate_all(&qce->dma);
if (error)
dev_dbg(qce->dev, "skcipher dma termination error (%d)\n",
@@ -94,6 +96,8 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
+ qce_bam_acquire_lock(qce);
+
rctx->src_nents = sg_nents_for_len(req->src, req->cryptlen);
if (diff_dst)
rctx->dst_nents = sg_nents_for_len(req->dst, req->cryptlen);
--
2.34.1
* [PATCH 10/11] crypto: qce - Add support for lock/unlock in sha
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add support for lock/unlock of the BAM pipe in sha.
When multiple EEs (Execution Environments) try to access
the same crypto engine, an EE has to lock the BAM pipe
before submitting a request to the crypto engine, and
unlock the BAM pipe once the request is done so that
other EEs can access the crypto engine.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/sha.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index f850c6206a31..942aecbb0736 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -47,6 +47,8 @@ static void qce_ahash_done(void *data)
int error;
u32 status;
+ qce_bam_release_lock(qce);
+
error = qce_dma_terminate_all(&qce->dma);
if (error)
dev_dbg(qce->dev, "ahash dma termination error (%d)\n", error);
@@ -102,6 +104,8 @@ static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
rctx->authklen = AES_KEYSIZE_128;
}
+ qce_bam_acquire_lock(qce);
+
rctx->src_nents = sg_nents_for_len(req->src, req->nbytes);
if (rctx->src_nents < 0) {
dev_err(qce->dev, "Invalid numbers of src SG.\n");
--
2.34.1
* [PATCH 11/11] crypto: qce - Add support for lock/unlock in aead
From: Md Sadre Alam @ 2023-12-14 11:42 UTC (permalink / raw)
To: thara.gopinath, herbert, davem, agross, andersson, konrad.dybcio,
vkoul, linux-crypto, linux-arm-msm, linux-kernel, dmaengine,
quic_srichara, quic_varada
Cc: quic_mdalam
Add support for lock/unlock of the BAM pipe in aead.
When multiple EEs (Execution Environments) try to access
the same crypto engine, an EE has to lock the BAM pipe
before submitting a request to the crypto engine, and
unlock the BAM pipe once the request is done so that
other EEs can access the crypto engine.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
---
drivers/crypto/qce/aead.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index c03600f396be..0948c30ea515 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -42,6 +42,8 @@ static void qce_aead_done(void *data)
dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
+ qce_bam_release_lock(qce);
+
error = qce_dma_terminate_all(&qce->dma);
if (error)
dev_dbg(qce->dev, "aead dma termination error (%d)\n",
@@ -445,6 +447,8 @@ qce_aead_async_req_handle(struct crypto_async_request *async_req)
else
rctx->assoclen = req->assoclen;
+ qce_bam_acquire_lock(qce);
+
diff_dst = (req->src != req->dst) ? true : false;
dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
--
2.34.1