public inbox for u-boot@lists.denx.de
* [U-Boot] [PATCH 1/1] nvme: Fix PRP Offset Invalid
@ 2019-08-21  0:34 Aaron Williams
  2019-08-21  7:55 ` Bin Meng
  0 siblings, 1 reply; 24+ messages in thread
From: Aaron Williams @ 2019-08-21  0:34 UTC (permalink / raw)
  To: u-boot

When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid.  I tracked this down to the
improper handling of PRP entries.  The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries.  This is how the Linux kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size.  Each page can hold (4096 / 8) - 1 entries, since the
last entry must point to the next page in the pool.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
 drivers/nvme/nvme.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..ae64459edf 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	int length = total_len;
 	int i, nprps;
 	length -= (page_size - offset);
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;

 	if (length <= 0) {
 		*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}

 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = (nprps + prps_per_page - 1) / prps_per_page;

 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}

 	prp_pool = dev->prp_pool;
@@ -791,7 +794,7 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));

-	ndev->prp_pool = malloc(MAX_PRP_POOL);
+	ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
 	if (!ndev->prp_pool) {
 		ret = -ENOMEM;
 		printf("Error: %s: Out of memory!\n", udev->name);
--
2.16.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-21 14:09 Aaron Williams
  2019-08-22  1:40 ` Bin Meng
  0 siblings, 1 reply; 24+ messages in thread
From: Aaron Williams @ 2019-08-21 14:09 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <aaron.williams@cavium.com>

When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid.  I tracked this down to the
improper handling of PRP entries.  The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries.  This is how the Linux kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size.  Each page can hold (4096 / 8) - 1 entries, since the
last entry must point to the next page in the pool.

Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
 drivers/nvme/nvme.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..71ea226820 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	u64 *prp_pool;
 	int length = total_len;
 	int i, nprps;
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;
+
 	length -= (page_size - offset);
 
 	if (length <= 0) {
@@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}
 
 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = DIV_ROUND_UP(nprps, prps_per_page);
 
 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}
 
 	prp_pool = dev->prp_pool;
@@ -791,12 +795,6 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
 
-	ndev->prp_pool = malloc(MAX_PRP_POOL);
-	if (!ndev->prp_pool) {
-		ret = -ENOMEM;
-		printf("Error: %s: Out of memory!\n", udev->name);
-		goto free_nvme;
-	}
 	ndev->prp_entry_num = MAX_PRP_POOL >> 3;
 
 	ndev->cap = nvme_readq(&ndev->bar->cap);
@@ -808,6 +806,13 @@ static int nvme_probe(struct udevice *udev)
 	if (ret)
 		goto free_queue;
 
+	ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+	if (!ndev->prp_pool) {
+		ret = -ENOMEM;
+		printf("Error: %s: Out of memory!\n", udev->name);
+		goto free_nvme;
+	}
+
 	ret = nvme_setup_io_queues(ndev);
 	if (ret)
 		goto free_queue;
-- 
2.16.4

* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-21 13:40 Aaron Williams
  0 siblings, 0 replies; 24+ messages in thread
From: Aaron Williams @ 2019-08-21 13:40 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <aaron.williams@cavium.com>

When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid.  I tracked this down to the
improper handling of PRP entries.  The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries.  This is how the Linux kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size.  Each page can hold (4096 / 8) - 1 entries, since the
last entry must point to the next page in the pool.

Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
 drivers/nvme/nvme.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..ae64459edf 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	int length = total_len;
 	int i, nprps;
 	length -= (page_size - offset);
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;
 
 	if (length <= 0) {
 		*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}
 
 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = (nprps + prps_per_page - 1) / prps_per_page;
 
 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}
 
 	prp_pool = dev->prp_pool;
@@ -791,7 +794,7 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
 
-	ndev->prp_pool = malloc(MAX_PRP_POOL);
+	ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
 	if (!ndev->prp_pool) {
 		ret = -ENOMEM;
 		printf("Error: %s: Out of memory!\n", udev->name);
-- 
2.16.4

* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-20  7:18 Aaron Williams
  0 siblings, 0 replies; 24+ messages in thread
From: Aaron Williams @ 2019-08-20  7:18 UTC (permalink / raw)
  To: u-boot

When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid.  I tracked this down to the
improper handling of PRP entries.  The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries.  This is how the Linux kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size.  Each page can hold (4096 / 8) - 1 entries, since the
last entry must point to the next page in the pool.

Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
 drivers/nvme/nvme.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..c94a6d0654 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	int length = total_len;
 	int i, nprps;
 	length -= (page_size - offset);
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;

 	if (length <= 0) {
 		*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}

 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = (nprps + prps_per_page - 1) / prps_per_page;

 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}

 	prp_pool = dev->prp_pool;
@@ -115,6 +118,7 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 		nprps--;
 	}
 	*prp2 = (ulong)dev->prp_pool;
+	flush_dcache_range(*prp2, *prp2 + (num_pages * page_size));

 	return 0;
 }
@@ -791,7 +795,7 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));

-	ndev->prp_pool = malloc(MAX_PRP_POOL);
+	ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
 	if (!ndev->prp_pool) {
 		ret = -ENOMEM;
 		printf("Error: %s: Out of memory!\n", udev->name);
--
2.16.4


end of thread, other threads:[~2019-08-27  0:19 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-21  0:34 [U-Boot] [PATCH 1/1] nvme: Fix PRP Offset Invalid Aaron Williams
2019-08-21  7:55 ` Bin Meng
2019-08-21 11:23   ` [U-Boot] [PATCH 1/1][nvme] " Aaron Williams
2019-08-21 11:23     ` [U-Boot] [PATCH] nvme: " Aaron Williams
2019-08-21 11:26   ` [U-Boot] [EXT] Re: [PATCH 1/1] " Aaron Williams
2019-08-21 15:23     ` Bin Meng
2019-08-21 22:06       ` Aaron Williams
2019-08-22  1:38         ` Bin Meng
2019-08-22  9:12         ` [U-Boot] [PATCH] " Aaron Williams
2019-08-22  9:17           ` Aaron Williams
2019-08-22 14:25           ` Bin Meng
2019-08-22 18:05             ` [U-Boot] [PATCH v3 1/1] " Aaron Williams
2019-08-22 18:05               ` [U-Boot] [PATCH v3 0/1] nvme: Fix invalid PRP Offset Aaron Williams
2019-08-22 18:05               ` [U-Boot] [PATCH v3 1/1] nvme: Fix PRP Offset Invalid Aaron Williams
2019-08-23  3:24                 ` Bin Meng
2019-08-23  3:37                   ` [U-Boot] [PATCH v4 " Aaron Williams
2019-08-23  3:37                     ` [U-Boot] [PATCH v4 0/1] " Aaron Williams
2019-08-23  3:37                     ` [U-Boot] [PATCH v4 1/1] " Aaron Williams
2019-08-27  0:19                     ` Tom Rini
  -- strict thread matches above, loose matches on Subject: below --
2019-08-21 14:09 [U-Boot] [PATCH] " Aaron Williams
2019-08-22  1:40 ` Bin Meng
2019-08-22  2:48   ` Aaron Williams
2019-08-21 13:40 Aaron Williams
2019-08-20  7:18 Aaron Williams

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox