* [PATCH 01/13] nvme: don't call nvme_init_ctrl_finish from nvme_passthru_end
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-14 6:25 ` Sagi Grimberg
2022-11-13 16:11 ` [PATCH 02/13] nvme: move OPAL setup from PCIe to core Christoph Hellwig
` (13 subsequent siblings)
14 siblings, 1 reply; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
nvme_passthru_end can race with a reset, which can lead to
racing stores to the cels xarray as well as further shenanigans
with the upcoming more complicated initialization.
So drop the call and just log that the controller capabilities
might have changed and a reset may be required to use the new
capabilities.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/core.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1a87a072fbed3..ce8314aee1ddf 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1123,8 +1123,10 @@ void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects,
mutex_unlock(&ctrl->subsys->lock);
mutex_unlock(&ctrl->scan_lock);
}
- if (effects & NVME_CMD_EFFECTS_CCC)
- nvme_init_ctrl_finish(ctrl);
+ if (effects & NVME_CMD_EFFECTS_CCC) {
+ dev_info(ctrl->device,
+"controller capabilities changed, reset may be required to take effect.\n");
+ }
if (effects & (NVME_CMD_EFFECTS_NIC | NVME_CMD_EFFECTS_NCC)) {
nvme_queue_scan(ctrl);
flush_work(&ctrl->scan_work);
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH 02/13] nvme: move OPAL setup from PCIe to core
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
2022-11-13 16:11 ` [PATCH 01/13] nvme: don't call nvme_init_ctrl_finish from nvme_passthru_end Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-14 16:37 ` James Smart
2022-11-21 10:28 ` Sagi Grimberg
2022-11-13 16:11 ` [PATCH 03/13] nvme: simplify transport specific device attribute handling Christoph Hellwig
` (12 subsequent siblings)
14 siblings, 2 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Nothing about the TCG Opal support is PCIe transport specific, so move it
to the core code. For this nvme_init_ctrl_finish grows a new
was_suspended argument that allows the transport driver to tell the OPAL
code if the controller came out of a suspend cycle.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/apple.c | 2 +-
drivers/nvme/host/core.c | 25 ++++++++++++++++++++++---
drivers/nvme/host/fc.c | 2 +-
drivers/nvme/host/nvme.h | 5 +----
drivers/nvme/host/pci.c | 14 +-------------
drivers/nvme/host/rdma.c | 2 +-
drivers/nvme/host/tcp.c | 2 +-
drivers/nvme/target/loop.c | 2 +-
8 files changed, 29 insertions(+), 25 deletions(-)
diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index 24e224c279a41..a85349a7e938c 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -1102,7 +1102,7 @@ static void apple_nvme_reset_work(struct work_struct *work)
goto out;
}
- ret = nvme_init_ctrl_finish(&anv->ctrl);
+ ret = nvme_init_ctrl_finish(&anv->ctrl, false);
if (ret)
goto out;
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ce8314aee1ddf..aedacf2fba69e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2192,7 +2192,7 @@ const struct pr_ops nvme_pr_ops = {
};
#ifdef CONFIG_BLK_SED_OPAL
-int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
+static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
bool send)
{
struct nvme_ctrl *ctrl = data;
@@ -2209,7 +2209,23 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
NVME_QID_ANY, 1, 0);
}
-EXPORT_SYMBOL_GPL(nvme_sec_submit);
+
+static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
+{
+ if (ctrl->oacs & NVME_CTRL_OACS_SEC_SUPP) {
+ if (!ctrl->opal_dev)
+ ctrl->opal_dev = init_opal_dev(ctrl, &nvme_sec_submit);
+ else if (was_suspended)
+ opal_unlock_from_suspend(ctrl->opal_dev);
+ } else {
+ free_opal_dev(ctrl->opal_dev);
+ ctrl->opal_dev = NULL;
+ }
+}
+#else
+static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
+{
+}
#endif /* CONFIG_BLK_SED_OPAL */
#ifdef CONFIG_BLK_DEV_ZONED
@@ -3242,7 +3258,7 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
* register in our nvme_ctrl structure. This should be called as soon as
* the admin queue is fully up and running.
*/
-int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
+int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl, bool was_suspended)
{
int ret;
@@ -3273,6 +3289,8 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
if (ret < 0)
return ret;
+ nvme_configure_opal(ctrl, was_suspended);
+
if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
/*
* Do not return errors unless we are in a controller reset,
@@ -5007,6 +5025,7 @@ static void nvme_free_ctrl(struct device *dev)
nvme_auth_stop(ctrl);
nvme_auth_free(ctrl);
__free_page(ctrl->discard_page);
+ free_opal_dev(ctrl->opal_dev);
if (subsys) {
mutex_lock(&nvme_subsystems_lock);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 2d3c54838496f..1f9f4075794b5 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3107,7 +3107,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
nvme_start_admin_queue(&ctrl->ctrl);
- ret = nvme_init_ctrl_finish(&ctrl->ctrl);
+ ret = nvme_init_ctrl_finish(&ctrl->ctrl, false);
if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
goto out_disconnect_admin_queue;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 16b34a4914959..306a120d49ab9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -736,7 +736,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
void nvme_uninit_ctrl(struct nvme_ctrl *ctrl);
void nvme_start_ctrl(struct nvme_ctrl *ctrl);
void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
-int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl);
+int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl, bool was_suspended);
int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
const struct blk_mq_ops *ops, unsigned int flags,
unsigned int cmd_size);
@@ -748,9 +748,6 @@ void nvme_remove_io_tag_set(struct nvme_ctrl *ctrl);
void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
-int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
- bool send);
-
void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
volatile union nvme_result *res);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 208c387f1558d..e4f084e12b966 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2772,7 +2772,6 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
nvme_free_tagset(dev);
if (dev->ctrl.admin_q)
blk_put_queue(dev->ctrl.admin_q);
- free_opal_dev(dev->ctrl.opal_dev);
mempool_destroy(dev->iod_mempool);
put_device(dev->dev);
kfree(dev->queues);
@@ -2866,21 +2865,10 @@ static void nvme_reset_work(struct work_struct *work)
*/
dev->ctrl.max_integrity_segments = 1;
- result = nvme_init_ctrl_finish(&dev->ctrl);
+ result = nvme_init_ctrl_finish(&dev->ctrl, was_suspend);
if (result)
goto out;
- if (dev->ctrl.oacs & NVME_CTRL_OACS_SEC_SUPP) {
- if (!dev->ctrl.opal_dev)
- dev->ctrl.opal_dev =
- init_opal_dev(&dev->ctrl, &nvme_sec_submit);
- else if (was_suspend)
- opal_unlock_from_suspend(dev->ctrl.opal_dev);
- } else {
- free_opal_dev(dev->ctrl.opal_dev);
- dev->ctrl.opal_dev = NULL;
- }
-
if (dev->ctrl.oacs & NVME_CTRL_OACS_DBBUF_SUPP) {
result = nvme_dbbuf_dma_alloc(dev);
if (result)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 6e079abb22ee9..ccd45e5b32986 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -871,7 +871,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
nvme_start_admin_queue(&ctrl->ctrl);
- error = nvme_init_ctrl_finish(&ctrl->ctrl);
+ error = nvme_init_ctrl_finish(&ctrl->ctrl, false);
if (error)
goto out_quiesce_queue;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 1eed0fc26b3ae..4f8584657bb75 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1949,7 +1949,7 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
nvme_start_admin_queue(ctrl);
- error = nvme_init_ctrl_finish(ctrl);
+ error = nvme_init_ctrl_finish(ctrl, false);
if (error)
goto out_quiesce_queue;
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index b45fe3adf015f..893c50f365c4d 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -377,7 +377,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
nvme_start_admin_queue(&ctrl->ctrl);
- error = nvme_init_ctrl_finish(&ctrl->ctrl);
+ error = nvme_init_ctrl_finish(&ctrl->ctrl, false);
if (error)
goto out_cleanup_tagset;
--
2.30.2
* Re: [PATCH 02/13] nvme: move OPAL setup from PCIe to core
2022-11-13 16:11 ` [PATCH 02/13] nvme: move OPAL setup from PCIe to core Christoph Hellwig
@ 2022-11-14 16:37 ` James Smart
2022-11-21 10:28 ` Sagi Grimberg
1 sibling, 0 replies; 22+ messages in thread
From: James Smart @ 2022-11-14 16:37 UTC (permalink / raw)
To: Christoph Hellwig, Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
On 11/13/2022 8:11 AM, Christoph Hellwig wrote:
> Nothing about the TCG Opal support is PCIe transport specific, so move it
> to the core code. For this nvme_init_ctrl_finish grows a new
> was_suspended argument that allows the transport driver to tell the OPAL
> code if the controller came out of a suspend cycle.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Keith Busch <kbusch@kernel.org>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
> ---
> drivers/nvme/host/apple.c | 2 +-
> drivers/nvme/host/core.c | 25 ++++++++++++++++++++++---
> drivers/nvme/host/fc.c | 2 +-
> drivers/nvme/host/nvme.h | 5 +----
> drivers/nvme/host/pci.c | 14 +-------------
> drivers/nvme/host/rdma.c | 2 +-
> drivers/nvme/host/tcp.c | 2 +-
> drivers/nvme/target/loop.c | 2 +-
> 8 files changed, 29 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
> index 24e224c279a41..a85349a7e938c 100644
Reviewed-by: James Smart <jsmart2021@gmail.com>
-- james
* Re: [PATCH 02/13] nvme: move OPAL setup from PCIe to core
2022-11-13 16:11 ` [PATCH 02/13] nvme: move OPAL setup from PCIe to core Christoph Hellwig
2022-11-14 16:37 ` James Smart
@ 2022-11-21 10:28 ` Sagi Grimberg
1 sibling, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2022-11-21 10:28 UTC (permalink / raw)
To: Christoph Hellwig, Keith Busch
Cc: Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
On 11/13/22 18:11, Christoph Hellwig wrote:
> Nothing about the TCG Opal support is PCIe transport specific, so move it
> to the core code. For this nvme_init_ctrl_finish grows a new
> was_suspended argument that allows the transport driver to tell the OPAL
> code if the controller came out of a suspend cycle.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Keith Busch <kbusch@kernel.org>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
> ---
> drivers/nvme/host/apple.c | 2 +-
> drivers/nvme/host/core.c | 25 ++++++++++++++++++++++---
> drivers/nvme/host/fc.c | 2 +-
> drivers/nvme/host/nvme.h | 5 +----
> drivers/nvme/host/pci.c | 14 +-------------
> drivers/nvme/host/rdma.c | 2 +-
> drivers/nvme/host/tcp.c | 2 +-
> drivers/nvme/target/loop.c | 2 +-
> 8 files changed, 29 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
> index 24e224c279a41..a85349a7e938c 100644
> --- a/drivers/nvme/host/apple.c
> +++ b/drivers/nvme/host/apple.c
> @@ -1102,7 +1102,7 @@ static void apple_nvme_reset_work(struct work_struct *work)
> goto out;
> }
>
> - ret = nvme_init_ctrl_finish(&anv->ctrl);
> + ret = nvme_init_ctrl_finish(&anv->ctrl, false);
> if (ret)
> goto out;
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index ce8314aee1ddf..aedacf2fba69e 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2192,7 +2192,7 @@ const struct pr_ops nvme_pr_ops = {
> };
>
> #ifdef CONFIG_BLK_SED_OPAL
> -int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
> +static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
> bool send)
> {
> struct nvme_ctrl *ctrl = data;
> @@ -2209,7 +2209,23 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
> return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
> NVME_QID_ANY, 1, 0);
> }
> -EXPORT_SYMBOL_GPL(nvme_sec_submit);
> +
> +static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
> +{
> + if (ctrl->oacs & NVME_CTRL_OACS_SEC_SUPP) {
> + if (!ctrl->opal_dev)
> + ctrl->opal_dev = init_opal_dev(ctrl, &nvme_sec_submit);
> + else if (was_suspended)
> + opal_unlock_from_suspend(ctrl->opal_dev);
> + } else {
> + free_opal_dev(ctrl->opal_dev);
> + ctrl->opal_dev = NULL;
> + }
> +}
> +#else
> +static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
> +{
> +}
> #endif /* CONFIG_BLK_SED_OPAL */
>
> #ifdef CONFIG_BLK_DEV_ZONED
> @@ -3242,7 +3258,7 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
> * register in our nvme_ctrl structure. This should be called as soon as
> * the admin queue is fully up and running.
> */
> -int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
> +int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl, bool was_suspended)
> {
> int ret;
>
> @@ -3273,6 +3289,8 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
> if (ret < 0)
> return ret;
>
> + nvme_configure_opal(ctrl, was_suspended);
> +
> if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
> /*
> * Do not return errors unless we are in a controller reset,
> @@ -5007,6 +5025,7 @@ static void nvme_free_ctrl(struct device *dev)
> nvme_auth_stop(ctrl);
> nvme_auth_free(ctrl);
> __free_page(ctrl->discard_page);
> + free_opal_dev(ctrl->opal_dev);
>
> if (subsys) {
> mutex_lock(&nvme_subsystems_lock);
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 2d3c54838496f..1f9f4075794b5 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -3107,7 +3107,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>
> nvme_start_admin_queue(&ctrl->ctrl);
>
> - ret = nvme_init_ctrl_finish(&ctrl->ctrl);
> + ret = nvme_init_ctrl_finish(&ctrl->ctrl, false);
Completely imaginary question,
Since you correctly indicated that opal is not pcie specific, why is
this passed as always false?
Wondering if an OPAL fabrics device comes along and becomes
incompatible with this...
Although fabrics call nvme_shutdown_ctrl only when deleting the
controller, so there is no opal_dev in any case... There is no real
suspend in fabrics at all really. So I guess it makes sense.
It's just a bit confusing to think about in the fabrics context.
* [PATCH 03/13] nvme: simplify transport specific device attribute handling
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
2022-11-13 16:11 ` [PATCH 01/13] nvme: don't call nvme_init_ctrl_finish from nvme_passthru_end Christoph Hellwig
2022-11-13 16:11 ` [PATCH 02/13] nvme: move OPAL setup from PCIe to core Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 04/13] nvme-pci: put the admin queue in nvme_dev_remove_admin Christoph Hellwig
` (11 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Allow the transport driver to override the attribute groups for the
control device, so that the PCIe driver doesn't manually have to add a
group after device creation and keep track of it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/core.c | 8 ++++++--
drivers/nvme/host/nvme.h | 2 ++
drivers/nvme/host/pci.c | 23 ++++++++---------------
3 files changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index aedacf2fba69e..6040a13d3e2d1 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3906,10 +3906,11 @@ static umode_t nvme_dev_attrs_are_visible(struct kobject *kobj,
return a->mode;
}
-static const struct attribute_group nvme_dev_attrs_group = {
+const struct attribute_group nvme_dev_attrs_group = {
.attrs = nvme_dev_attrs,
.is_visible = nvme_dev_attrs_are_visible,
};
+EXPORT_SYMBOL_GPL(nvme_dev_attrs_group);
static const struct attribute_group *nvme_dev_attr_groups[] = {
&nvme_dev_attrs_group,
@@ -5091,7 +5092,10 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
ctrl->instance);
ctrl->device->class = nvme_class;
ctrl->device->parent = ctrl->dev;
- ctrl->device->groups = nvme_dev_attr_groups;
+ if (ops->dev_attr_groups)
+ ctrl->device->groups = ops->dev_attr_groups;
+ else
+ ctrl->device->groups = nvme_dev_attr_groups;
ctrl->device->release = nvme_free_ctrl;
dev_set_drvdata(ctrl->device, ctrl);
ret = dev_set_name(ctrl->device, "nvme%d", ctrl->instance);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 306a120d49ab9..924ff80d85f60 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -508,6 +508,7 @@ struct nvme_ctrl_ops {
unsigned int flags;
#define NVME_F_FABRICS (1 << 0)
#define NVME_F_METADATA_SUPPORTED (1 << 1)
+ const struct attribute_group **dev_attr_groups;
int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -854,6 +855,7 @@ int nvme_dev_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags);
extern const struct attribute_group *nvme_ns_id_attr_groups[];
extern const struct pr_ops nvme_pr_ops;
extern const struct block_device_operations nvme_ns_head_ops;
+extern const struct attribute_group nvme_dev_attrs_group;
struct nvme_ns *nvme_find_path(struct nvme_ns_head *head);
#ifdef CONFIG_NVME_MULTIPATH
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index e4f084e12b966..c8f6ce5eee1c2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -158,8 +158,6 @@ struct nvme_dev {
unsigned int nr_allocated_queues;
unsigned int nr_write_queues;
unsigned int nr_poll_queues;
-
- bool attrs_added;
};
static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
@@ -2234,11 +2232,17 @@ static struct attribute *nvme_pci_attrs[] = {
NULL,
};
-static const struct attribute_group nvme_pci_attr_group = {
+static const struct attribute_group nvme_pci_dev_attrs_group = {
.attrs = nvme_pci_attrs,
.is_visible = nvme_pci_attrs_are_visible,
};
+static const struct attribute_group *nvme_pci_dev_attr_groups[] = {
+ &nvme_dev_attrs_group,
+ &nvme_pci_dev_attrs_group,
+ NULL,
+};
+
/*
* nirqs is the number of interrupts available for write and read
* queues. The core already reserved an interrupt for the admin queue.
@@ -2930,10 +2934,6 @@ static void nvme_reset_work(struct work_struct *work)
goto out;
}
- if (!dev->attrs_added && !sysfs_create_group(&dev->ctrl.device->kobj,
- &nvme_pci_attr_group))
- dev->attrs_added = true;
-
nvme_start_ctrl(&dev->ctrl);
return;
@@ -3006,6 +3006,7 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
.name = "pcie",
.module = THIS_MODULE,
.flags = NVME_F_METADATA_SUPPORTED,
+ .dev_attr_groups = nvme_pci_dev_attr_groups,
.reg_read32 = nvme_pci_reg_read32,
.reg_write32 = nvme_pci_reg_write32,
.reg_read64 = nvme_pci_reg_read64,
@@ -3204,13 +3205,6 @@ static void nvme_shutdown(struct pci_dev *pdev)
nvme_disable_prepare_reset(dev, true);
}
-static void nvme_remove_attrs(struct nvme_dev *dev)
-{
- if (dev->attrs_added)
- sysfs_remove_group(&dev->ctrl.device->kobj,
- &nvme_pci_attr_group);
-}
-
/*
* The driver's remove may be called on a device in a partially initialized
* state. This function must not have any dependencies on the device state in
@@ -3232,7 +3226,6 @@ static void nvme_remove(struct pci_dev *pdev)
nvme_stop_ctrl(&dev->ctrl);
nvme_remove_namespaces(&dev->ctrl);
nvme_dev_disable(dev, true);
- nvme_remove_attrs(dev);
nvme_free_host_mem(dev);
nvme_dev_remove_admin(dev);
nvme_free_queues(dev, 0);
--
2.30.2
* [PATCH 04/13] nvme-pci: put the admin queue in nvme_dev_remove_admin
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (2 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 03/13] nvme: simplify transport specific device attribute handling Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 05/13] nvme-pci: move more teardown work to nvme_remove Christoph Hellwig
` (10 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Once the controller is shut down, no one can access the admin queue.
Tear it down in nvme_dev_remove_admin, which matches the flow in the
other drivers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c8f6ce5eee1c2..f526ad578088a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1747,6 +1747,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
*/
nvme_start_admin_queue(&dev->ctrl);
blk_mq_destroy_queue(dev->ctrl.admin_q);
+ blk_put_queue(dev->ctrl.admin_q);
blk_mq_free_tag_set(&dev->admin_tagset);
}
}
@@ -2774,8 +2775,6 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
nvme_dbbuf_dma_free(dev);
nvme_free_tagset(dev);
- if (dev->ctrl.admin_q)
- blk_put_queue(dev->ctrl.admin_q);
mempool_destroy(dev->iod_mempool);
put_device(dev->dev);
kfree(dev->queues);
--
2.30.2
* [PATCH 05/13] nvme-pci: move more teardown work to nvme_remove
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (3 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 04/13] nvme-pci: put the admin queue in nvme_dev_remove_admin Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 06/13] nvme-pci: factor the iod mempool creation into a helper Christoph Hellwig
` (9 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
nvme_dbbuf_dma_free frees DMA-coherent memory, so it must not be called
after ->remove has returned. Fortunately there is no way to use it
after shutdown, as no more I/O is possible, so it can be moved. Similarly
the iod_mempool can't be used for a device kept alive after shutdown, so
move it next to freeing the PRP pools.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f526ad578088a..b638f43f2df26 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2773,9 +2773,7 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
{
struct nvme_dev *dev = to_nvme_dev(ctrl);
- nvme_dbbuf_dma_free(dev);
nvme_free_tagset(dev);
- mempool_destroy(dev->iod_mempool);
put_device(dev->dev);
kfree(dev->queues);
kfree(dev);
@@ -3227,7 +3225,9 @@ static void nvme_remove(struct pci_dev *pdev)
nvme_dev_disable(dev, true);
nvme_free_host_mem(dev);
nvme_dev_remove_admin(dev);
+ nvme_dbbuf_dma_free(dev);
nvme_free_queues(dev, 0);
+ mempool_destroy(dev->iod_mempool);
nvme_release_prp_pools(dev);
nvme_dev_unmap(dev);
nvme_uninit_ctrl(&dev->ctrl);
--
2.30.2
* [PATCH 06/13] nvme-pci: factor the iod mempool creation into a helper
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (4 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 05/13] nvme-pci: move more teardown work to nvme_remove Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 07/13] nvme-pci: factor out a nvme_pci_alloc_dev helper Christoph Hellwig
` (8 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Add a helper to create the iod mempool.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 41 ++++++++++++++++++-----------------------
1 file changed, 18 insertions(+), 23 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b638f43f2df26..f7dab65bf5042 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -390,14 +390,6 @@ static int nvme_pci_npages_sgl(void)
PAGE_SIZE);
}
-static size_t nvme_pci_iod_alloc_size(void)
-{
- size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl());
-
- return sizeof(__le64 *) * npages +
- sizeof(struct scatterlist) * NVME_MAX_SEGS;
-}
-
static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
unsigned int hctx_idx)
{
@@ -2762,6 +2754,22 @@ static void nvme_release_prp_pools(struct nvme_dev *dev)
dma_pool_destroy(dev->prp_small_pool);
}
+static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev)
+{
+ size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl());
+ size_t alloc_size = sizeof(__le64 *) * npages +
+ sizeof(struct scatterlist) * NVME_MAX_SEGS;
+
+ WARN_ON_ONCE(alloc_size > PAGE_SIZE);
+ dev->iod_mempool = mempool_create_node(1,
+ mempool_kmalloc, mempool_kfree,
+ (void *)alloc_size, GFP_KERNEL,
+ dev_to_node(dev->dev));
+ if (!dev->iod_mempool)
+ return -ENOMEM;
+ return 0;
+}
+
static void nvme_free_tagset(struct nvme_dev *dev)
{
if (dev->tagset.tags)
@@ -3087,7 +3095,6 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
int node, result = -ENOMEM;
struct nvme_dev *dev;
unsigned long quirks = id->driver_data;
- size_t alloc_size;
node = dev_to_node(&pdev->dev);
if (node == NUMA_NO_NODE)
@@ -3132,21 +3139,9 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
quirks |= NVME_QUIRK_SIMPLE_SUSPEND;
}
- /*
- * Double check that our mempool alloc size will cover the biggest
- * command we support.
- */
- alloc_size = nvme_pci_iod_alloc_size();
- WARN_ON_ONCE(alloc_size > PAGE_SIZE);
-
- dev->iod_mempool = mempool_create_node(1, mempool_kmalloc,
- mempool_kfree,
- (void *) alloc_size,
- GFP_KERNEL, node);
- if (!dev->iod_mempool) {
- result = -ENOMEM;
+ result = nvme_pci_alloc_iod_mempool(dev);
+ if (result)
goto release_pools;
- }
result = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
quirks);
--
2.30.2
* [PATCH 07/13] nvme-pci: factor out a nvme_pci_alloc_dev helper
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (5 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 06/13] nvme-pci: factor the iod mempool creation into a helper Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 08/13] nvme-pci: set constant parameters in nvme_pci_alloc_ctrl Christoph Hellwig
` (7 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Add a helper that allocates the nvme_dev structure up to the point where
we can call nvme_init_ctrl. This pairs with the free_ctrl method and can
thus be used to clean up the teardown path and make it more symmetric.
Note that this now calls nvme_init_ctrl a lot earlier during probing,
which also means the per-controller character device shows up earlier.
Due to the controller state no commands can be sent on it, but it might
make sense to delay the cdev registration until nvme_init_ctrl_finish.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 81 +++++++++++++++++++++++------------------
1 file changed, 46 insertions(+), 35 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f7dab65bf5042..03c83cd724ec5 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2777,6 +2777,7 @@ static void nvme_free_tagset(struct nvme_dev *dev)
dev->ctrl.tagset = NULL;
}
+/* pairs with nvme_pci_alloc_dev */
static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
{
struct nvme_dev *dev = to_nvme_dev(ctrl);
@@ -3090,19 +3091,23 @@ static void nvme_async_probe(void *data, async_cookie_t cookie)
nvme_put_ctrl(&dev->ctrl);
}
-static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
+ const struct pci_device_id *id)
{
- int node, result = -ENOMEM;
- struct nvme_dev *dev;
unsigned long quirks = id->driver_data;
+ int node = dev_to_node(&pdev->dev);
+ struct nvme_dev *dev;
+ int ret = -ENOMEM;
- node = dev_to_node(&pdev->dev);
if (node == NUMA_NO_NODE)
set_dev_node(&pdev->dev, first_memory_node);
dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
if (!dev)
- return -ENOMEM;
+ return NULL;
+ INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
+ INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
+ mutex_init(&dev->shutdown_lock);
dev->nr_write_queues = write_queues;
dev->nr_poll_queues = poll_queues;
@@ -3110,25 +3115,11 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
dev->queues = kcalloc_node(dev->nr_allocated_queues,
sizeof(struct nvme_queue), GFP_KERNEL, node);
if (!dev->queues)
- goto free;
+ goto out_free_dev;
dev->dev = get_device(&pdev->dev);
- pci_set_drvdata(pdev, dev);
-
- result = nvme_dev_map(dev);
- if (result)
- goto put_pci;
-
- INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
- INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
- mutex_init(&dev->shutdown_lock);
-
- result = nvme_setup_prp_pools(dev);
- if (result)
- goto unmap;
quirks |= check_vendor_combination_bug(pdev);
-
if (!noacpi && acpi_storage_d3(&pdev->dev)) {
/*
* Some systems use a bios work around to ask for D3 on
@@ -3138,34 +3129,54 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
"platform quirk: setting simple suspend\n");
quirks |= NVME_QUIRK_SIMPLE_SUSPEND;
}
+ ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
+ quirks);
+ if (ret)
+ goto out_put_device;
+ return dev;
- result = nvme_pci_alloc_iod_mempool(dev);
+out_put_device:
+ put_device(dev->dev);
+ kfree(dev->queues);
+out_free_dev:
+ kfree(dev);
+ return ERR_PTR(ret);
+}
+
+static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct nvme_dev *dev;
+ int result = -ENOMEM;
+
+ dev = nvme_pci_alloc_dev(pdev, id);
+ if (!dev)
+ return -ENOMEM;
+
+ result = nvme_dev_map(dev);
if (result)
- goto release_pools;
+ goto out_uninit_ctrl;
- result = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,
- quirks);
+ result = nvme_setup_prp_pools(dev);
+ if (result)
+ goto out_dev_unmap;
+
+ result = nvme_pci_alloc_iod_mempool(dev);
if (result)
- goto release_mempool;
+ goto out_release_prp_pools;
dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));
+ pci_set_drvdata(pdev, dev);
nvme_reset_ctrl(&dev->ctrl);
async_schedule(nvme_async_probe, dev);
-
return 0;
- release_mempool:
- mempool_destroy(dev->iod_mempool);
- release_pools:
+out_release_prp_pools:
nvme_release_prp_pools(dev);
- unmap:
+out_dev_unmap:
nvme_dev_unmap(dev);
- put_pci:
- put_device(dev->dev);
- free:
- kfree(dev->queues);
- kfree(dev);
+out_uninit_ctrl:
+ nvme_uninit_ctrl(&dev->ctrl);
return result;
}
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 08/13] nvme-pci: set constant parameters in nvme_pci_alloc_dev
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (6 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 07/13] nvme-pci: factor out a nvme_pci_alloc_dev helper Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 09/13] nvme-pci: call nvme_pci_configure_admin_queue from nvme_pci_enable Christoph Hellwig
` (6 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Move the setting of the low-level constant parameters from nvme_reset_work
to nvme_pci_alloc_dev.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 38 +++++++++++++++++---------------------
1 file changed, 17 insertions(+), 21 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 03c83cd724ec5..9dcb35f148009 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2841,21 +2841,6 @@ static void nvme_reset_work(struct work_struct *work)
nvme_start_admin_queue(&dev->ctrl);
}
- dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
-
- /*
- * Limit the max command size to prevent iod->sg allocations going
- * over a single page.
- */
- dev->ctrl.max_hw_sectors = min_t(u32,
- NVME_MAX_KB_SZ << 1, dma_max_mapping_size(dev->dev) >> 9);
- dev->ctrl.max_segments = NVME_MAX_SEGS;
-
- /*
- * Don't limit the IOMMU merged segment size.
- */
- dma_set_max_seg_size(dev->dev, 0xffffffff);
-
mutex_unlock(&dev->shutdown_lock);
/*
@@ -2869,12 +2854,6 @@ static void nvme_reset_work(struct work_struct *work)
goto out;
}
- /*
- * We do not support an SGL for metadata (yet), so we are limited to a
- * single integrity segment for the separate metadata pointer.
- */
- dev->ctrl.max_integrity_segments = 1;
-
result = nvme_init_ctrl_finish(&dev->ctrl, was_suspend);
if (result)
goto out;
@@ -3133,6 +3112,23 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
quirks);
if (ret)
goto out_put_device;
+
+ dma_set_min_align_mask(&pdev->dev, NVME_CTRL_PAGE_SIZE - 1);
+ dma_set_max_seg_size(&pdev->dev, 0xffffffff);
+
+ /*
+ * Limit the max command size to prevent iod->sg allocations going
+ * over a single page.
+ */
+ dev->ctrl.max_hw_sectors = min_t(u32,
+ NVME_MAX_KB_SZ << 1, dma_max_mapping_size(&pdev->dev) >> 9);
+ dev->ctrl.max_segments = NVME_MAX_SEGS;
+
+ /*
+ * There is no support for SGLs for metadata (yet), so we are limited to
+ * a single integrity segment for the separate metadata pointer.
+ */
+ dev->ctrl.max_integrity_segments = 1;
return dev;
out_put_device:
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 09/13] nvme-pci: call nvme_pci_configure_admin_queue from nvme_pci_enable
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (7 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 08/13] nvme-pci: set constant parameters in nvme_pci_alloc_dev Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 10/13] nvme-pci: simplify nvme_dbbuf_dma_alloc Christoph Hellwig
` (5 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
nvme_pci_configure_admin_queue is called right after nvme_pci_enable, and
its work is undone by nvme_dev_disable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9dcb35f148009..c2e3a87237da8 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2639,7 +2639,8 @@ static int nvme_pci_enable(struct nvme_dev *dev)
pci_enable_pcie_error_reporting(pdev);
pci_save_state(pdev);
- return 0;
+
+ return nvme_pci_configure_admin_queue(dev);
disable:
pci_disable_device(pdev);
@@ -2829,10 +2830,6 @@ static void nvme_reset_work(struct work_struct *work)
if (result)
goto out_unlock;
- result = nvme_pci_configure_admin_queue(dev);
- if (result)
- goto out_unlock;
-
if (!dev->ctrl.admin_q) {
result = nvme_pci_alloc_admin_tag_set(dev);
if (result)
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 10/13] nvme-pci: simplify nvme_dbbuf_dma_alloc
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (8 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 09/13] nvme-pci: call nvme_pci_configure_admin_queue from nvme_pci_enable Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-14 6:28 ` Sagi Grimberg
2022-11-13 16:11 ` [PATCH 11/13] nvme-pci: move the HMPRE check into nvme_setup_host_mem Christoph Hellwig
` (4 subsequent siblings)
14 siblings, 1 reply; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Move the OACS check and the error checking into nvme_dbbuf_dma_alloc so
that an upcoming second caller doesn't have to duplicate this boilerplate
code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/nvme/host/pci.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c2e3a87237da8..4da339690ec67 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -239,10 +239,13 @@ static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev)
return dev->nr_allocated_queues * 8 * dev->db_stride;
}
-static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
+static void nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
{
unsigned int mem_size = nvme_dbbuf_size(dev);
+ if (!(dev->ctrl.oacs & NVME_CTRL_OACS_DBBUF_SUPP))
+ return;
+
if (dev->dbbuf_dbs) {
/*
* Clear the dbbuf memory so the driver doesn't observe stale
@@ -250,25 +253,27 @@ static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
*/
memset(dev->dbbuf_dbs, 0, mem_size);
memset(dev->dbbuf_eis, 0, mem_size);
- return 0;
+ return;
}
dev->dbbuf_dbs = dma_alloc_coherent(dev->dev, mem_size,
&dev->dbbuf_dbs_dma_addr,
GFP_KERNEL);
if (!dev->dbbuf_dbs)
- return -ENOMEM;
+ goto fail;
dev->dbbuf_eis = dma_alloc_coherent(dev->dev, mem_size,
&dev->dbbuf_eis_dma_addr,
GFP_KERNEL);
- if (!dev->dbbuf_eis) {
- dma_free_coherent(dev->dev, mem_size,
- dev->dbbuf_dbs, dev->dbbuf_dbs_dma_addr);
- dev->dbbuf_dbs = NULL;
- return -ENOMEM;
- }
+ if (!dev->dbbuf_eis)
+ goto fail_free_dbbuf_dbs;
+ return;
- return 0;
+fail_free_dbbuf_dbs:
+ dma_free_coherent(dev->dev, mem_size, dev->dbbuf_dbs,
+ dev->dbbuf_dbs_dma_addr);
+ dev->dbbuf_dbs = NULL;
+fail:
+ dev_warn(dev->dev, "unable to allocate dma for dbbuf\n");
}
static void nvme_dbbuf_dma_free(struct nvme_dev *dev)
@@ -2855,12 +2860,7 @@ static void nvme_reset_work(struct work_struct *work)
if (result)
goto out;
- if (dev->ctrl.oacs & NVME_CTRL_OACS_DBBUF_SUPP) {
- result = nvme_dbbuf_dma_alloc(dev);
- if (result)
- dev_warn(dev->dev,
- "unable to allocate dma for dbbuf\n");
- }
+ nvme_dbbuf_dma_alloc(dev);
if (dev->ctrl.hmpre) {
result = nvme_setup_host_mem(dev);
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* Re: [PATCH 10/13] nvme-pci: simplify nvme_dbbuf_dma_alloc
2022-11-13 16:11 ` [PATCH 10/13] nvme-pci: simplify nvme_dbbuf_dma_alloc Christoph Hellwig
@ 2022-11-14 6:28 ` Sagi Grimberg
0 siblings, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2022-11-14 6:28 UTC (permalink / raw)
To: Christoph Hellwig, Keith Busch
Cc: Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
> Move the OACS check and the error checking into nvme_dbbuf_dma_alloc so
> that an upcoming second caller doesn't have to duplicate this boilerplate
> code.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/pci.c | 32 ++++++++++++++++----------------
> 1 file changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index c2e3a87237da8..4da339690ec67 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -239,10 +239,13 @@ static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev)
> return dev->nr_allocated_queues * 8 * dev->db_stride;
> }
>
> -static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
> +static void nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
> {
> unsigned int mem_size = nvme_dbbuf_size(dev);
>
> + if (!(dev->ctrl.oacs & NVME_CTRL_OACS_DBBUF_SUPP))
> + return;
> +
I usually dislike functions that may or may not operate based on
caps check inside them. Even if there are more than one call-site.
But that is a personal taste.
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 11/13] nvme-pci: move the HMPRE check into nvme_setup_host_mem
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (9 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 10/13] nvme-pci: simplify nvme_dbbuf_dma_alloc Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-14 6:29 ` Sagi Grimberg
2022-11-13 16:11 ` [PATCH 12/13] nvme-pci: split the initial probe from the rest path Christoph Hellwig
` (3 subsequent siblings)
14 siblings, 1 reply; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Move the check that a HMB is wanted into the allocation helper instead of
the caller. This makes life simpler for an upcoming second caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/nvme/host/pci.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4da339690ec67..54e16cf1590c3 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2102,6 +2102,9 @@ static int nvme_setup_host_mem(struct nvme_dev *dev)
u32 enable_bits = NVME_HOST_MEM_ENABLE;
int ret;
+ if (!dev->ctrl.hmpre)
+ return 0;
+
preferred = min(preferred, max);
if (min > max) {
dev_warn(dev->ctrl.device,
@@ -2862,11 +2865,9 @@ static void nvme_reset_work(struct work_struct *work)
nvme_dbbuf_dma_alloc(dev);
- if (dev->ctrl.hmpre) {
- result = nvme_setup_host_mem(dev);
- if (result < 0)
- goto out;
- }
+ result = nvme_setup_host_mem(dev);
+ if (result < 0)
+ goto out;
result = nvme_setup_io_queues(dev);
if (result)
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 12/13] nvme-pci: split the initial probe from the rest path
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (10 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 11/13] nvme-pci: move the HMPRE check into nvme_setup_host_mem Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-13 16:11 ` [PATCH 13/13] nvme-pci: don't unbind the driver on reset failure Christoph Hellwig
` (2 subsequent siblings)
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
nvme_reset_work is a little fragile as it needs to handle both resetting
a live controller and initializing one during probe. Split out the initial
probe and open code it in nvme_probe and leave nvme_reset_work to just do
the live controller reset.
This fixes a recently introduced bug where nvme_dev_disable causes a NULL
pointer dereference in blk_mq_quiesce_tagset because the tagset pointer
is not set when the reset state is entered directly from the new state.
The separate probe code can skip the reset state and probe directly and
fixes this.
To make sure the system isn't single threaded on enabling nvme
controllers, set the PROBE_PREFER_ASYNCHRONOUS flag in the device_driver
structure so that the driver core probes in parallel.
Fixes: 98d81f0df70c ("nvme: use blk_mq_[un]quiesce_tagset")
Reported-by: Gerd Bayer <gbayer@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
---
drivers/nvme/host/pci.c | 133 ++++++++++++++++++++++++----------------
1 file changed, 80 insertions(+), 53 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 54e16cf1590c3..6a5b661084509 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2837,15 +2837,7 @@ static void nvme_reset_work(struct work_struct *work)
result = nvme_pci_enable(dev);
if (result)
goto out_unlock;
-
- if (!dev->ctrl.admin_q) {
- result = nvme_pci_alloc_admin_tag_set(dev);
- if (result)
- goto out_unlock;
- } else {
- nvme_start_admin_queue(&dev->ctrl);
- }
-
+ nvme_start_admin_queue(&dev->ctrl);
mutex_unlock(&dev->shutdown_lock);
/*
@@ -2873,37 +2865,23 @@ static void nvme_reset_work(struct work_struct *work)
if (result)
goto out;
- if (dev->ctrl.tagset) {
- /*
- * This is a controller reset and we already have a tagset.
- * Freeze and update the number of I/O queues as thos might have
- * changed. If there are no I/O queues left after this reset,
- * keep the controller around but remove all namespaces.
- */
- if (dev->online_queues > 1) {
- nvme_start_queues(&dev->ctrl);
- nvme_wait_freeze(&dev->ctrl);
- nvme_pci_update_nr_queues(dev);
- nvme_dbbuf_set(dev);
- nvme_unfreeze(&dev->ctrl);
- } else {
- dev_warn(dev->ctrl.device, "IO queues lost\n");
- nvme_mark_namespaces_dead(&dev->ctrl);
- nvme_start_queues(&dev->ctrl);
- nvme_remove_namespaces(&dev->ctrl);
- nvme_free_tagset(dev);
- }
+ /*
+ * Freeze and update the number of I/O queues as those might have
+ * changed. If there are no I/O queues left after this reset, keep the
+ * controller around but remove all namespaces.
+ */
+ if (dev->online_queues > 1) {
+ nvme_start_queues(&dev->ctrl);
+ nvme_wait_freeze(&dev->ctrl);
+ nvme_pci_update_nr_queues(dev);
+ nvme_dbbuf_set(dev);
+ nvme_unfreeze(&dev->ctrl);
} else {
- /*
- * First probe. Still allow the controller to show up even if
- * there are no namespaces.
- */
- if (dev->online_queues > 1) {
- nvme_pci_alloc_tag_set(dev);
- nvme_dbbuf_set(dev);
- } else {
- dev_warn(dev->ctrl.device, "IO queues not created\n");
- }
+ dev_warn(dev->ctrl.device, "IO queues lost\n");
+ nvme_mark_namespaces_dead(&dev->ctrl);
+ nvme_start_queues(&dev->ctrl);
+ nvme_remove_namespaces(&dev->ctrl);
+ nvme_free_tagset(dev);
}
/*
@@ -3059,15 +3037,6 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
return 0;
}
-static void nvme_async_probe(void *data, async_cookie_t cookie)
-{
- struct nvme_dev *dev = data;
-
- flush_work(&dev->ctrl.reset_work);
- flush_work(&dev->ctrl.scan_work);
- nvme_put_ctrl(&dev->ctrl);
-}
-
static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
const struct pci_device_id *id)
{
@@ -3159,12 +3128,69 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto out_release_prp_pools;
dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));
+
+ result = nvme_pci_enable(dev);
+ if (result)
+ goto out_release_iod_mempool;
+
+ result = nvme_pci_alloc_admin_tag_set(dev);
+ if (result)
+ goto out_disable;
+
+ /*
+ * Mark the controller as connecting before sending admin commands to
+ * allow the timeout handler to do the right thing.
+ */
+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_CONNECTING)) {
+ dev_warn(dev->ctrl.device,
+ "failed to mark controller CONNECTING\n");
+ result = -EBUSY;
+ goto out_disable;
+ }
+
+ result = nvme_init_ctrl_finish(&dev->ctrl, false);
+ if (result)
+ goto out_disable;
+
+ nvme_dbbuf_dma_alloc(dev);
+
+ result = nvme_setup_host_mem(dev);
+ if (result < 0)
+ goto out_disable;
+
+ result = nvme_setup_io_queues(dev);
+ if (result)
+ goto out_disable;
+
+ if (dev->online_queues > 1) {
+ nvme_pci_alloc_tag_set(dev);
+ nvme_dbbuf_set(dev);
+ } else {
+ dev_warn(dev->ctrl.device, "IO queues not created\n");
+ }
+
+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_LIVE)) {
+ dev_warn(dev->ctrl.device,
+ "failed to mark controller live state\n");
+ result = -ENODEV;
+ goto out_disable;
+ }
+
pci_set_drvdata(pdev, dev);
- nvme_reset_ctrl(&dev->ctrl);
- async_schedule(nvme_async_probe, dev);
+ nvme_start_ctrl(&dev->ctrl);
+ nvme_put_ctrl(&dev->ctrl);
return 0;
+out_disable:
+ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
+ nvme_dev_disable(dev, true);
+ nvme_free_host_mem(dev);
+ nvme_dev_remove_admin(dev);
+ nvme_dbbuf_dma_free(dev);
+ nvme_free_queues(dev, 0);
+out_release_iod_mempool:
+ mempool_destroy(dev->iod_mempool);
out_release_prp_pools:
nvme_release_prp_pools(dev);
out_dev_unmap:
@@ -3560,11 +3586,12 @@ static struct pci_driver nvme_driver = {
.probe = nvme_probe,
.remove = nvme_remove,
.shutdown = nvme_shutdown,
-#ifdef CONFIG_PM_SLEEP
.driver = {
- .pm = &nvme_dev_pm_ops,
- },
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
+#ifdef CONFIG_PM_SLEEP
+ .pm = &nvme_dev_pm_ops,
#endif
+ },
.sriov_configure = pci_sriov_configure_simple,
.err_handler = &nvme_err_handler,
};
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 13/13] nvme-pci: don't unbind the driver on reset failure
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (11 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 12/13] nvme-pci: split the initial probe from the rest path Christoph Hellwig
@ 2022-11-13 16:11 ` Christoph Hellwig
2022-11-14 6:32 ` Sagi Grimberg
2022-11-14 4:00 ` nvme-pci: split the probe and reset handlers v2 Chaitanya Kulkarni
2022-11-15 9:57 ` Christoph Hellwig
14 siblings, 1 reply; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-13 16:11 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
Unbinding a device driver when a reset fails is very unusual behavior.
Just shut the controller down and leave it in a dead state if we fail
to reset it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/nvme/host/pci.c | 40 ++++++++++------------------------------
1 file changed, 10 insertions(+), 30 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6a5b661084509..fe21c9c153128 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -130,7 +130,6 @@ struct nvme_dev {
u32 db_stride;
void __iomem *bar;
unsigned long bar_mapped_size;
- struct work_struct remove_work;
struct mutex shutdown_lock;
bool subsystem;
u64 cmb_size;
@@ -2797,20 +2796,6 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
kfree(dev);
}
-static void nvme_remove_dead_ctrl(struct nvme_dev *dev)
-{
- /*
- * Set state to deleting now to avoid blocking nvme_wait_reset(), which
- * may be holding this pci_dev's device lock.
- */
- nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
- nvme_get_ctrl(&dev->ctrl);
- nvme_dev_disable(dev, false);
- nvme_mark_namespaces_dead(&dev->ctrl);
- if (!queue_work(nvme_wq, &dev->remove_work))
- nvme_put_ctrl(&dev->ctrl);
-}
-
static void nvme_reset_work(struct work_struct *work)
{
struct nvme_dev *dev =
@@ -2901,20 +2886,16 @@ static void nvme_reset_work(struct work_struct *work)
out_unlock:
mutex_unlock(&dev->shutdown_lock);
out:
- if (result)
- dev_warn(dev->ctrl.device,
- "Removing after probe failure status: %d\n", result);
- nvme_remove_dead_ctrl(dev);
-}
-
-static void nvme_remove_dead_ctrl_work(struct work_struct *work)
-{
- struct nvme_dev *dev = container_of(work, struct nvme_dev, remove_work);
- struct pci_dev *pdev = to_pci_dev(dev->dev);
-
- if (pci_get_drvdata(pdev))
- device_release_driver(&pdev->dev);
- nvme_put_ctrl(&dev->ctrl);
+ /*
+ * Set state to deleting now to avoid blocking nvme_wait_reset(), which
+ * may be holding this pci_dev's device lock.
+ */
+ dev_warn(dev->ctrl.device, "Disabling device after reset failure: %d\n",
+ result);
+ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
+ nvme_dev_disable(dev, true);
+ nvme_mark_namespaces_dead(&dev->ctrl);
+ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
}
static int nvme_pci_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
@@ -3052,7 +3033,6 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
if (!dev)
return NULL;
INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
- INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
mutex_init(&dev->shutdown_lock);
dev->nr_write_queues = write_queues;
--
2.30.2
^ permalink raw reply related [flat|nested] 22+ messages in thread* Re: nvme-pci: split the probe and reset handlers v2
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (12 preceding siblings ...)
2022-11-13 16:11 ` [PATCH 13/13] nvme-pci: don't unbind the driver on reset failure Christoph Hellwig
@ 2022-11-14 4:00 ` Chaitanya Kulkarni
2022-11-15 9:57 ` Christoph Hellwig
14 siblings, 0 replies; 22+ messages in thread
From: Chaitanya Kulkarni @ 2022-11-14 4:00 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Keith Busch, Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer,
asahi@lists.linux.dev, linux-nvme@lists.infradead.org
>
> On Nov 13, 2022, at 8:11 AM, Christoph Hellwig <hch@lst.de> wrote:
>
> Hi all,
>
> this series split the nvme-pci probe handler to be separate from the reset
> handler. I've been wanting to do that for a while, but the bug report from
> Gerd that was caused by confusing about the controller state in the reset
> state required it to be expedited.
>
> Changes since v1:
> - switch back from IS_ENABLED for the SED_OPAL code to prevent a warning
> when it is disabled.
> - rename nvme_pci_alloc_ctrl to nvme_pci_alloc_dev
> - allow initializating shadow doorbell buffers during reset
> - simplify HMB setup a bit
> - shutdown the controller on reset failure
>
> Diffstat:
> host/apple.c | 2
> host/core.c | 39 ++++-
> host/fc.c | 2
> host/nvme.h | 7
> host/pci.c | 413 ++++++++++++++++++++++++++++------------------------------
> host/rdma.c | 2
> host/tcp.c | 2
> target/loop.c | 2
> 8 files changed, 240 insertions(+), 229 deletions(-)
This looks good to me minus the apple and fc part...
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
^ permalink raw reply [flat|nested] 22+ messages in thread* Re: nvme-pci: split the probe and reset handlers v2
2022-11-13 16:11 nvme-pci: split the probe and reset handlers v2 Christoph Hellwig
` (13 preceding siblings ...)
2022-11-14 4:00 ` nvme-pci: split the probe and reset handlers v2 Chaitanya Kulkarni
@ 2022-11-15 9:57 ` Christoph Hellwig
14 siblings, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2022-11-15 9:57 UTC (permalink / raw)
To: Keith Busch
Cc: Sagi Grimberg, Chaitanya Kulkarni, Gerd Bayer, asahi, linux-nvme
I've pulled this into nvme-6.2 now.
^ permalink raw reply [flat|nested] 22+ messages in thread