* [RFC 0/8] nvmet: Add support for multi-tenant configfs
@ 2016-06-07 6:36 Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 1/8] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
` (7 more replies)
0 siblings, 8 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
Hi folks,
Here's a first pass at an nvmet multi-tenant configfs layout,
following what we've learned in target_core_fabric_configfs.c
with respect to independent operation of storage endpoints.
Here is how the running RFC-v1 code currently looks:
/sys/kernel/config/nvmet/subsystems/
└── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
├── namespaces
│ └── 1
│ └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
└── ports
└── loop
├── addr_adrfam
├── addr_portid
├── addr_traddr
├── addr_treq
├── addr_trsvcid
├── addr_trtype
└── enable
Namely, it allows existing /sys/kernel/config/target/core/ backends
to be configfs symlinked into ../nvmet/subsystems/$SUBSYS_NQN/
as nvme namespaces.
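To make the symlink flow concrete, here is a rough sketch of the intended admin sequence. This is illustrative only: $CFG is a scratch directory standing in for /sys/kernel/config, where the kernel (not mkdir) would create the nvmet and target-core groups; the NQN and backend names come from the tree above.

```shell
#!/bin/sh -e
# Sketch of mapping a target-core backend in as an nvme namespace.
# On a live system CFG=/sys/kernel/config, and the directories below
# are created by the configfs code, not by mkdir.
CFG=${CFG:-/tmp/nvmet-demo}
NQN=nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep

# Subsystem NQN with namespace id 1
mkdir -p "$CFG/nvmet/subsystems/$NQN/namespaces/1"

# Pre-existing target-core ramdisk backend
mkdir -p "$CFG/target/core/rd_mcp_1/ramdisk0"

# The configfs symlink is what maps the backend in as namespace 1
# (the operation nvmet_ns_link() handles in the kernel)
ln -sf "$CFG/target/core/rd_mcp_1/ramdisk0" \
       "$CFG/nvmet/subsystems/$NQN/namespaces/1/ramdisk0"

readlink "$CFG/nvmet/subsystems/$NQN/namespaces/1/ramdisk0"
```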
The series exposes T10-PI from /sys/kernel/config/target/core/ as
ID_NS.ms + ID_NS.dps feature bits, and enables block integrity
support with the nvme/loop driver.
Note this series depends on the following target-core
prerequisites:
http://marc.info/?l=linux-scsi&m=146527281416606&w=2
and of course, today's earlier release of nvmet + friends:
http://lists.infradead.org/pipermail/linux-nvme/2016-June/004754.html
Note the full set of patches is available from:
https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/log/?h=nvmet-configfs-ng
Comments..?
--nab
Nicholas Bellinger (8):
nvmet: Add nvmet_fabric_ops get/put transport helpers
nvmet: Add support for configfs-ng multi-tenant logic
nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable
nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops
nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops
nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops
nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
nvme/loop: Add support for bio integrity handling
drivers/nvme/target/Makefile | 2 +-
drivers/nvme/target/admin-cmd.c | 17 ++
drivers/nvme/target/configfs-ng.c | 585 ++++++++++++++++++++++++++++++++++++++
drivers/nvme/target/configfs.c | 5 +-
drivers/nvme/target/core.c | 83 +++---
drivers/nvme/target/io-cmd.c | 169 ++++++-----
drivers/nvme/target/loop.c | 19 ++
drivers/nvme/target/nvmet.h | 27 +-
8 files changed, 799 insertions(+), 108 deletions(-)
create mode 100644 drivers/nvme/target/configfs-ng.c
--
1.9.1
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 11+ messages in thread
* [RFC 1/8] nvmet: Add nvmet_fabric_ops get/put transport helpers
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 2/8] nvmet: Add support for configfs-ng multi-tenant logic Nicholas A. Bellinger
` (6 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch introduces two helpers for obtaining and releasing the
struct nvmet_fabrics_ops for nvmet_port usage, along with the
associated struct module ops->owner reference.
This is required in order to support nvmet/configfs-ng
and multiple nvmet_port configfs groups living under
/sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/ports/
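As a hedged sketch of what "multiple nvmet_port groups" means in practice, here are two subsystems each carrying an independent loop port. The tenant NQNs are made up for the example, and $CFG is a scratch stand-in for /sys/kernel/config:

```shell
#!/bin/sh -e
# Two independent subsystem NQNs, each with its own ports/ group.
# The per-port transport get/put introduced here is what lets their
# lifetimes stay independent of one another.
CFG=${CFG:-/tmp/nvmet-demo}
for nqn in nqn.2016-06.org.example.tenant-a \
           nqn.2016-06.org.example.tenant-b; do
	mkdir -p "$CFG/nvmet/subsystems/$nqn/ports/loop"
done
ls "$CFG/nvmet/subsystems"
```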
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/core.c | 31 +++++++++++++++++++++++++++++++
drivers/nvme/target/nvmet.h | 3 +++
2 files changed, 34 insertions(+)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index e0b3f01..9af813c 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -191,6 +191,37 @@ void nvmet_disable_port(struct nvmet_port *port)
module_put(ops->owner);
}
+struct nvmet_fabrics_ops *nvmet_get_transport(struct nvmet_port *port)
+{
+ struct nvmet_fabrics_ops *ops;
+
+ down_write(&nvmet_config_sem);
+ ops = nvmet_transports[port->disc_addr.trtype];
+ if (!ops) {
+ up_write(&nvmet_config_sem);
+ pr_err("transport type %d not supported\n",
+ port->disc_addr.trtype);
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (!try_module_get(ops->owner)) {
+ up_write(&nvmet_config_sem);
+ return ERR_PTR(-EINVAL);
+ }
+ up_write(&nvmet_config_sem);
+
+ return ops;
+}
+
+void nvmet_put_transport(struct nvmet_port *port)
+{
+ struct nvmet_fabrics_ops *ops;
+
+ down_write(&nvmet_config_sem);
+ ops = nvmet_transports[port->disc_addr.trtype];
+ module_put(ops->owner);
+ up_write(&nvmet_config_sem);
+}
+
static void nvmet_keep_alive_timer(struct work_struct *work)
{
struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 57dd6d8..2bf15088b 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -299,6 +299,9 @@ void nvmet_unregister_transport(struct nvmet_fabrics_ops *ops);
int nvmet_enable_port(struct nvmet_port *port);
void nvmet_disable_port(struct nvmet_port *port);
+struct nvmet_fabrics_ops *nvmet_get_transport(struct nvmet_port *port);
+void nvmet_put_transport(struct nvmet_port *port);
+
void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port);
void nvmet_referral_disable(struct nvmet_port *port);
--
1.9.1
* [RFC 2/8] nvmet: Add support for configfs-ng multi-tenant logic
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 1/8] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 3/8] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable Nicholas A. Bellinger
` (5 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch introduces support for configfs-ng, which allows
multi-tenant /sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/
operation, letting existing /sys/kernel/config/target/core/
backends from target-core be configfs symlinked in as
per subsystem NQN nvme-target namespaces.
Here's how the layout looks:
/sys/kernel/config/nvmet/
└── subsystems
└── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
├── namespaces
│ └── 1
│ └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
└── ports
└── loop
├── addr_adrfam
├── addr_portid
├── addr_traddr
├── addr_treq
├── addr_trsvcid
├── addr_trtype
└── enable
Also convert nvmet_find_get_subsys to use port->nf_subsys, and
do the same for nvmet_host_discovery_allowed.
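The port attributes in the layout above are driven in the usual configfs way. A sketch of the configure-and-enable sequence follows; $CFG is a scratch stand-in for /sys/kernel/config (on real configfs the attribute files already exist, and writing "1" to enable is what invokes nvmet_port_enable_store() -> ops->add_port()):

```shell
#!/bin/sh -e
# Sketch of configuring and enabling the loop port under a subsystem.
# Against real configfs the kernel creates the attribute files and
# interprets the writes; here plain files stand in for them.
CFG=${CFG:-/tmp/nvmet-demo}
PORT="$CFG/nvmet/subsystems/nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep/ports/loop"
mkdir -p "$PORT"
echo loop > "$PORT/addr_trtype"
echo 1 > "$PORT/enable"
cat "$PORT/addr_trtype" "$PORT/enable"   # -> prints "loop" then "1"
```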
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/Makefile | 2 +-
drivers/nvme/target/configfs-ng.c | 586 ++++++++++++++++++++++++++++++++++++++
drivers/nvme/target/configfs.c | 5 +-
drivers/nvme/target/core.c | 22 +-
drivers/nvme/target/nvmet.h | 11 +
5 files changed, 608 insertions(+), 18 deletions(-)
create mode 100644 drivers/nvme/target/configfs-ng.c
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index b7a0623..2799e07 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_NVME_TARGET) += nvmet.o
obj-$(CONFIG_NVME_TARGET_LOOP) += nvme-loop.o
obj-$(CONFIG_NVME_TARGET_RDMA) += nvmet-rdma.o
-nvmet-y += core.o configfs.o admin-cmd.o io-cmd.o fabrics-cmd.o \
+nvmet-y += core.o configfs-ng.o admin-cmd.o io-cmd.o fabrics-cmd.o \
discovery.o
nvme-loop-y += loop.o
nvmet-rdma-y += rdma.o
diff --git a/drivers/nvme/target/configfs-ng.c b/drivers/nvme/target/configfs-ng.c
new file mode 100644
index 0000000..d495017
--- /dev/null
+++ b/drivers/nvme/target/configfs-ng.c
@@ -0,0 +1,586 @@
+/*
+ * Based on target_core_fabric_configfs.c code
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/stat.h>
+#include <linux/ctype.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
+
+#include "nvmet.h"
+
+/*
+ * nvmet_port Generic ConfigFS definitions.
+ */
+static ssize_t nvmet_port_addr_adrfam_show(struct config_item *item,
+ char *page)
+{
+ switch (to_nvmet_port(item)->disc_addr.adrfam) {
+ case NVMF_ADDR_FAMILY_IP4:
+ return sprintf(page, "ipv4\n");
+ case NVMF_ADDR_FAMILY_IP6:
+ return sprintf(page, "ipv6\n");
+ case NVMF_ADDR_FAMILY_IB:
+ return sprintf(page, "ib\n");
+ default:
+ return sprintf(page, "\n");
+ }
+}
+
+static ssize_t nvmet_port_addr_adrfam_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+
+ if (sysfs_streq(page, "ipv4")) {
+ port->disc_addr.adrfam = NVMF_ADDR_FAMILY_IP4;
+ } else if (sysfs_streq(page, "ipv6")) {
+ port->disc_addr.adrfam = NVMF_ADDR_FAMILY_IP6;
+ } else if (sysfs_streq(page, "ib")) {
+ port->disc_addr.adrfam = NVMF_ADDR_FAMILY_IB;
+ } else {
+ pr_err("Invalid value '%s' for adrfam\n", page);
+ return -EINVAL;
+ }
+
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_adrfam);
+
+static ssize_t nvmet_port_addr_portid_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%d\n",
+ le16_to_cpu(port->disc_addr.portid));
+}
+
+static ssize_t nvmet_port_addr_portid_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ u16 portid = 0;
+
+ if (kstrtou16(page, 0, &portid)) {
+ pr_err("Invalid value '%s' for portid\n", page);
+ return -EINVAL;
+ }
+
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+ port->disc_addr.portid = cpu_to_le16(portid);
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_portid);
+
+static ssize_t nvmet_port_addr_traddr_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%s\n",
+ port->disc_addr.traddr);
+}
+
+static ssize_t nvmet_port_addr_traddr_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ if (count > NVMF_TRADDR_SIZE) {
+ pr_err("Invalid value '%s' for traddr\n", page);
+ return -EINVAL;
+ }
+
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+ return snprintf(port->disc_addr.traddr,
+ sizeof(port->disc_addr.traddr), "%s", page);
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_traddr);
+
+static ssize_t nvmet_port_addr_treq_show(struct config_item *item,
+ char *page)
+{
+ switch (to_nvmet_port(item)->disc_addr.treq) {
+ case NVMF_TREQ_NOT_SPECIFIED:
+ return sprintf(page, "not specified\n");
+ case NVMF_TREQ_REQUIRED:
+ return sprintf(page, "required\n");
+ case NVMF_TREQ_NOT_REQUIRED:
+ return sprintf(page, "not required\n");
+ default:
+ return sprintf(page, "\n");
+ }
+}
+
+static ssize_t nvmet_port_addr_treq_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+
+ if (sysfs_streq(page, "not specified")) {
+ port->disc_addr.treq = NVMF_TREQ_NOT_SPECIFIED;
+ } else if (sysfs_streq(page, "required")) {
+ port->disc_addr.treq = NVMF_TREQ_REQUIRED;
+ } else if (sysfs_streq(page, "not required")) {
+ port->disc_addr.treq = NVMF_TREQ_NOT_REQUIRED;
+ } else {
+ pr_err("Invalid value '%s' for treq\n", page);
+ return -EINVAL;
+ }
+
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_treq);
+
+static ssize_t nvmet_port_addr_trsvcid_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%s\n",
+ port->disc_addr.trsvcid);
+}
+
+static ssize_t nvmet_port_addr_trsvcid_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ if (count > NVMF_TRSVCID_SIZE) {
+ pr_err("Invalid value '%s' for trsvcid\n", page);
+ return -EINVAL;
+ }
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+ return snprintf(port->disc_addr.trsvcid,
+ sizeof(port->disc_addr.trsvcid), "%s", page);
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_trsvcid);
+
+static ssize_t nvmet_port_addr_trtype_show(struct config_item *item,
+ char *page)
+{
+ switch (to_nvmet_port(item)->disc_addr.trtype) {
+ case NVMF_TRTYPE_RDMA:
+ return sprintf(page, "rdma\n");
+ case NVMF_TRTYPE_LOOP:
+ return sprintf(page, "loop\n");
+ default:
+ return sprintf(page, "\n");
+ }
+}
+
+static void nvmet_port_init_tsas_rdma(struct nvmet_port *port)
+{
+ port->disc_addr.trtype = NVMF_TRTYPE_RDMA;
+ memset(&port->disc_addr.tsas.rdma, 0, NVMF_TSAS_SIZE);
+ port->disc_addr.tsas.rdma.qptype = NVMF_RDMA_QPTYPE_CONNECTED;
+ port->disc_addr.tsas.rdma.prtype = NVMF_RDMA_PRTYPE_NOT_SPECIFIED;
+ port->disc_addr.tsas.rdma.cms = NVMF_RDMA_CMS_RDMA_CM;
+}
+
+static void nvmet_port_init_tsas_loop(struct nvmet_port *port)
+{
+ port->disc_addr.trtype = NVMF_TRTYPE_LOOP;
+ memset(&port->disc_addr.tsas, 0, NVMF_TSAS_SIZE);
+}
+
+static ssize_t nvmet_port_addr_trtype_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ if (port->enabled) {
+ pr_err("Cannot modify address while enabled\n");
+ pr_err("Disable the address before modifying\n");
+ return -EACCES;
+ }
+
+ if (sysfs_streq(page, "rdma")) {
+ nvmet_port_init_tsas_rdma(port);
+ } else if (sysfs_streq(page, "loop")) {
+ nvmet_port_init_tsas_loop(port);
+ } else {
+ pr_err("Invalid value '%s' for trtype\n", page);
+ return -EINVAL;
+ }
+
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_trtype);
+
+static void nvmet_port_disable(struct nvmet_port *port)
+{
+ struct nvmet_fabrics_ops *ops = port->nf_ops;
+
+ if (!ops)
+ return;
+
+ ops->remove_port(port);
+ nvmet_put_transport(port);
+ port->nf_ops = NULL;
+}
+
+static ssize_t nvmet_port_enable_show(struct config_item *item, char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return sprintf(page, "%d\n", port->enabled);
+}
+
+static ssize_t nvmet_port_enable_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ struct nvmet_fabrics_ops *ops;
+ bool enable;
+ int rc;
+
+ pr_debug("port enable, trtype %d\n", port->disc_addr.trtype);
+
+ if (strtobool(page, &enable))
+ return -EINVAL;
+
+ if (enable) {
+ ops = nvmet_get_transport(port);
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+
+ port->nf_ops = ops;
+
+ rc = ops->add_port(port);
+ if (rc) {
+ port->nf_ops = NULL;
+ nvmet_put_transport(port);
+ return rc;
+ }
+ port->enabled = true;
+ } else {
+ if (!port->nf_ops)
+ return -EINVAL;
+
+ nvmet_port_disable(port);
+ }
+
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, enable);
+
+static struct configfs_attribute *nvmet_port_attrs[] = {
+ &nvmet_port_attr_addr_adrfam,
+ &nvmet_port_attr_addr_portid,
+ &nvmet_port_attr_addr_traddr,
+ &nvmet_port_attr_addr_treq,
+ &nvmet_port_attr_addr_trsvcid,
+ &nvmet_port_attr_addr_trtype,
+ &nvmet_port_attr_enable,
+ NULL,
+};
+
+/*
+ * NVMf transport port CIT
+ */
+static void nvmet_port_release(struct config_item *item)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ nvmet_port_disable(port);
+ kfree(port);
+}
+
+static struct configfs_item_operations nvmet_port_item_ops = {
+ .release = nvmet_port_release,
+};
+
+static struct config_item_type nvmet_port_type = {
+ .ct_item_ops = &nvmet_port_item_ops,
+ .ct_attrs = nvmet_port_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_ports(struct config_group *group,
+ const char *name)
+{
+ struct nvmet_subsys *subsys = ports_to_subsys(&group->cg_item);
+ struct nvmet_port *port;
+
+ pr_debug("nvmet_make_ports %s\n", name);
+
+ port = kzalloc(sizeof(*port), GFP_KERNEL);
+ if (!port)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&port->entry);
+ port->nf_subsys = subsys;
+
+ config_group_init_type_name(&port->group, name, &nvmet_port_type);
+
+ return &port->group;
+}
+
+static void nvmet_drop_ports(struct config_group *group, struct config_item *item)
+{
+ config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_ports_group_ops = {
+ .make_group = nvmet_make_ports,
+ .drop_item = nvmet_drop_ports,
+};
+
+static struct config_item_type nvmet_ports_type = {
+ .ct_group_ops = &nvmet_ports_group_ops,
+ .ct_item_ops = NULL,
+ .ct_attrs = NULL,
+ .ct_owner = THIS_MODULE,
+};
+
+/*
+ * NVMf namespace <-> /sys/kernel/config/target/core/ backend configfs symlink
+ */
+static int nvmet_ns_link(struct config_item *ns_ci, struct config_item *dev_ci)
+{
+ struct nvmet_ns *ns = to_nvmet_ns(ns_ci);
+ struct se_device *dev =
+ container_of(to_config_group(dev_ci), struct se_device, dev_group);
+
+ if (dev->dev_link_magic != SE_DEV_LINK_MAGIC) {
+ pr_err("Bad dev->dev_link_magic, not a valid se_dev_ci pointer:"
+ " %p to struct se_device: %p\n", dev_ci, dev);
+ return -EFAULT;
+ }
+
+ if (!(dev->dev_flags & DF_CONFIGURED)) {
+ pr_err("se_device not configured yet, cannot link namespace\n");
+ return -ENODEV;
+ }
+
+ if (!dev->transport->sbc_ops) {
+ pr_err("se_device does not have sbc_ops, cannot link namespace\n");
+ return -ENOSYS;
+ }
+
+ // XXX: Pass in struct se_device into nvmet_ns_enable
+ return nvmet_ns_enable(ns);
+}
+
+static int nvmet_ns_unlink(struct config_item *ns_ci, struct config_item *dev_ci)
+{
+ struct nvmet_ns *ns = to_nvmet_ns(ns_ci);
+
+ nvmet_ns_disable(ns);
+ return 0;
+}
+
+static void nvmet_ns_release(struct config_item *item)
+{
+ struct nvmet_ns *ns = to_nvmet_ns(item);
+
+ nvmet_ns_free(ns);
+}
+
+static struct configfs_item_operations nvmet_ns_item_ops = {
+ .release = nvmet_ns_release,
+ .allow_link = nvmet_ns_link,
+ .drop_link = nvmet_ns_unlink,
+};
+
+static struct config_item_type nvmet_ns_type = {
+ .ct_item_ops = &nvmet_ns_item_ops,
+ .ct_attrs = NULL,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_namespace(struct config_group *group,
+ const char *name)
+{
+ struct nvmet_subsys *subsys = namespaces_to_subsys(&group->cg_item);
+ struct nvmet_ns *ns;
+ int ret;
+ u32 nsid;
+
+ ret = kstrtou32(name, 0, &nsid);
+ if (ret)
+ goto out;
+
+ ret = -EINVAL;
+ if (nsid == 0 || nsid == 0xffffffff)
+ goto out;
+
+ ret = -ENOMEM;
+ ns = nvmet_ns_alloc(subsys, nsid);
+ if (!ns)
+ goto out;
+ config_group_init_type_name(&ns->group, name, &nvmet_ns_type);
+
+ pr_info("adding nsid %d to subsystem %s\n", nsid, subsys->subsysnqn);
+
+ return &ns->group;
+out:
+ return ERR_PTR(ret);
+}
+
+static void nvmet_drop_namespace(struct config_group *group, struct config_item *item)
+{
+ /*
+ * struct nvmet_ns is released via nvmet_ns_release()
+ */
+ config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_namespaces_group_ops = {
+ .make_group = nvmet_make_namespace,
+ .drop_item = nvmet_drop_namespace,
+};
+
+static struct config_item_type nvmet_namespaces_type = {
+ .ct_group_ops = &nvmet_namespaces_group_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+/*
+ * Subsystem structures & folder operation functions below
+ */
+static void nvmet_subsys_release(struct config_item *item)
+{
+ struct nvmet_subsys *subsys = to_subsys(item);
+
+ nvmet_subsys_put(subsys);
+}
+
+static struct configfs_item_operations nvmet_subsys_item_ops = {
+ .release = nvmet_subsys_release,
+};
+
+static struct config_item_type nvmet_subsys_type = {
+ .ct_item_ops = &nvmet_subsys_item_ops,
+// .ct_attrs = nvmet_subsys_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_subsys(struct config_group *group,
+ const char *name)
+{
+ struct nvmet_subsys *subsys;
+
+ if (sysfs_streq(name, NVME_DISC_SUBSYS_NAME)) {
+ pr_err("can't create discovery subsystem through configfs\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ subsys = nvmet_subsys_alloc(name, NVME_NQN_NVME);
+ if (!subsys)
+ return ERR_PTR(-ENOMEM);
+
+ config_group_init_type_name(&subsys->group, name, &nvmet_subsys_type);
+
+ config_group_init_type_name(&subsys->namespaces_group,
+ "namespaces", &nvmet_namespaces_type);
+ configfs_add_default_group(&subsys->namespaces_group, &subsys->group);
+
+ config_group_init_type_name(&subsys->ports_group,
+ "ports", &nvmet_ports_type);
+ configfs_add_default_group(&subsys->ports_group, &subsys->group);
+
+#if 0
+ config_group_init_type_name(&subsys->allowed_hosts_group,
+ "allowed_hosts", &nvmet_allowed_hosts_type);
+ configfs_add_default_group(&subsys->allowed_hosts_group,
+ &subsys->group);
+#endif
+// XXX: subsys->allow_any_host hardcoded to true
+ subsys->allow_any_host = true;
+
+ return &subsys->group;
+}
+
+static void nvmet_drop_subsys(struct config_group *group, struct config_item *item)
+{
+ /*
+ * struct nvmet_subsys is released via nvmet_subsys_release()
+ */
+ config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_subsystems_group_ops = {
+ .make_group = nvmet_make_subsys,
+ .drop_item = nvmet_drop_subsys,
+};
+
+static struct config_item_type nvmet_subsystems_type = {
+ .ct_group_ops = &nvmet_subsystems_group_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group nvmet_subsystems_group;
+
+static struct config_item_type nvmet_root_type = {
+ .ct_owner = THIS_MODULE,
+};
+
+static struct configfs_subsystem nvmet_configfs_subsystem = {
+ .su_group = {
+ .cg_item = {
+ .ci_namebuf = "nvmet",
+ .ci_type = &nvmet_root_type,
+ },
+ },
+};
+
+int __init nvmet_init_configfs(void)
+{
+ int ret;
+
+ config_group_init(&nvmet_configfs_subsystem.su_group);
+ mutex_init(&nvmet_configfs_subsystem.su_mutex);
+
+ config_group_init_type_name(&nvmet_subsystems_group,
+ "subsystems", &nvmet_subsystems_type);
+ configfs_add_default_group(&nvmet_subsystems_group,
+ &nvmet_configfs_subsystem.su_group);
+
+ ret = configfs_register_subsystem(&nvmet_configfs_subsystem);
+ if (ret) {
+ pr_err("configfs_register_subsystem: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+void __exit nvmet_exit_configfs(void)
+{
+ configfs_unregister_subsystem(&nvmet_configfs_subsystem);
+}
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index aebe646..d355a36 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -441,7 +441,9 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
if (!link)
return -ENOMEM;
link->subsys = subsys;
-
+#if 1
+ BUG_ON(1);
+#else
down_write(&nvmet_config_sem);
ret = -EEXIST;
list_for_each_entry(p, &port->subsystems, entry) {
@@ -458,6 +460,7 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
list_add_tail(&link->entry, &port->subsystems);
nvmet_genctr++;
up_write(&nvmet_config_sem);
+#endif
return 0;
out_free_link:
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 9af813c..7b42d2b 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -696,10 +696,8 @@ static bool __nvmet_host_allowed(struct nvmet_subsys *subsys,
static bool nvmet_host_discovery_allowed(struct nvmet_req *req,
const char *hostnqn)
{
- struct nvmet_subsys_link *s;
-
- list_for_each_entry(s, &req->port->subsystems, entry) {
- if (__nvmet_host_allowed(s->subsys, hostnqn))
+ if (req->port && req->port->nf_subsys) {
+ if (__nvmet_host_allowed(req->port->nf_subsys, hostnqn))
return true;
}
@@ -874,8 +872,6 @@ EXPORT_SYMBOL_GPL(nvmet_ctrl_fatal_error);
static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
const char *subsysnqn)
{
- struct nvmet_subsys_link *p;
-
if (!port)
return NULL;
@@ -886,17 +882,11 @@ static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
return nvmet_disc_subsys;
}
- down_read(&nvmet_config_sem);
- list_for_each_entry(p, &port->subsystems, entry) {
- if (!strncmp(p->subsys->subsysnqn, subsysnqn,
- NVMF_NQN_SIZE)) {
- if (!kref_get_unless_zero(&p->subsys->ref))
- break;
- up_read(&nvmet_config_sem);
- return p->subsys;
- }
+ if (port->nf_subsys) {
+ if (kref_get_unless_zero(&port->nf_subsys->ref))
+ return port->nf_subsys;
}
- up_read(&nvmet_config_sem);
+
return NULL;
}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 2bf15088b..db12e06 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -98,6 +98,9 @@ struct nvmet_port {
struct list_head referrals;
void *priv;
bool enabled;
+
+ struct nvmet_subsys *nf_subsys;
+ struct nvmet_fabrics_ops *nf_ops;
};
static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -158,6 +161,7 @@ struct nvmet_subsys {
struct config_group group;
struct config_group namespaces_group;
+ struct config_group ports_group;
struct config_group allowed_hosts_group;
};
@@ -173,6 +177,13 @@ static inline struct nvmet_subsys *namespaces_to_subsys(
namespaces_group);
}
+static inline struct nvmet_subsys *ports_to_subsys(
+ struct config_item *item)
+{
+ return container_of(to_config_group(item), struct nvmet_subsys,
+ ports_group);
+}
+
struct nvmet_host {
struct config_group group;
};
--
1.9.1
* [RFC 3/8] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 1/8] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 2/8] nvmet: Add support for configfs-ng multi-tenant logic Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 4/8] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops Nicholas A. Bellinger
` (4 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch hooks up nvmet_ns_enable() to accept the RCU-protected
struct se_device provided via configfs symlink from existing
/sys/kernel/config/target/core/ driver backends.
Also, drop the now unused internal ns->bdev + ns->device_path
usage, and add WIP stubs for the nvmet/io-cmd sbc_ops backend
conversion in subsequent patches.
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/configfs-ng.c | 3 +--
drivers/nvme/target/core.c | 30 ++++++++----------------------
drivers/nvme/target/io-cmd.c | 17 +++++++++++++++--
drivers/nvme/target/nvmet.h | 6 ++----
4 files changed, 26 insertions(+), 30 deletions(-)
diff --git a/drivers/nvme/target/configfs-ng.c b/drivers/nvme/target/configfs-ng.c
index d495017..e160186 100644
--- a/drivers/nvme/target/configfs-ng.c
+++ b/drivers/nvme/target/configfs-ng.c
@@ -392,8 +392,7 @@ static int nvmet_ns_link(struct config_item *ns_ci, struct config_item *dev_ci)
return -ENOSYS;
}
- // XXX: Pass in struct se_device into nvmet_ns_enable
- return nvmet_ns_enable(ns);
+ return nvmet_ns_enable(ns, dev);
}
static int nvmet_ns_unlink(struct config_item *ns_ci, struct config_item *dev_ci)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 7b42d2b..171e440 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -13,6 +13,8 @@
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
#include "nvmet.h"
static struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
@@ -287,7 +289,7 @@ void nvmet_put_namespace(struct nvmet_ns *ns)
percpu_ref_put(&ns->ref);
}
-int nvmet_ns_enable(struct nvmet_ns *ns)
+int nvmet_ns_enable(struct nvmet_ns *ns, struct se_device *dev)
{
struct nvmet_subsys *subsys = ns->subsys;
struct nvmet_ctrl *ctrl;
@@ -297,23 +299,14 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
if (!list_empty(&ns->dev_link))
goto out_unlock;
- ns->bdev = blkdev_get_by_path(ns->device_path, FMODE_READ | FMODE_WRITE,
- NULL);
- if (IS_ERR(ns->bdev)) {
- pr_err("nvmet: failed to open block device %s: (%ld)\n",
- ns->device_path, PTR_ERR(ns->bdev));
- ret = PTR_ERR(ns->bdev);
- ns->bdev = NULL;
- goto out_unlock;
- }
-
- ns->size = i_size_read(ns->bdev->bd_inode);
- ns->blksize_shift = blksize_bits(bdev_logical_block_size(ns->bdev));
+ rcu_assign_pointer(ns->dev, dev);
+ ns->size = dev->transport->get_blocks(dev) * dev->dev_attrib.hw_block_size;
+ ns->blksize_shift = blksize_bits(dev->dev_attrib.hw_block_size);
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
0, GFP_KERNEL);
if (ret)
- goto out_blkdev_put;
+ goto out_unlock;
if (ns->nsid > subsys->max_nsid)
subsys->max_nsid = ns->nsid;
@@ -343,10 +336,6 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
out_unlock:
mutex_unlock(&subsys->lock);
return ret;
-out_blkdev_put:
- blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
- ns->bdev = NULL;
- goto out_unlock;
}
void nvmet_ns_disable(struct nvmet_ns *ns)
@@ -379,16 +368,13 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
- if (ns->bdev)
- blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
+ rcu_assign_pointer(ns->dev, NULL);
mutex_unlock(&subsys->lock);
}
void nvmet_ns_free(struct nvmet_ns *ns)
{
nvmet_ns_disable(ns);
-
- kfree(ns->device_path);
kfree(ns);
}
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 76dbf73..38c2e97 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include "nvmet.h"
+#if 0
static void nvmet_bio_done(struct bio *bio)
{
struct nvmet_req *req = bio->bi_private;
@@ -26,6 +27,7 @@ static void nvmet_bio_done(struct bio *bio)
if (bio != &req->inline_bio)
bio_put(bio);
}
+#endif
static inline u32 nvmet_rw_len(struct nvmet_req *req)
{
@@ -33,6 +35,7 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
req->ns->blksize_shift;
}
+#if 0
static void nvmet_inline_bio_init(struct nvmet_req *req)
{
struct bio *bio = &req->inline_bio;
@@ -41,21 +44,23 @@ static void nvmet_inline_bio_init(struct nvmet_req *req)
bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;
bio->bi_io_vec = req->inline_bvec;
}
+#endif
static void nvmet_execute_rw(struct nvmet_req *req)
{
+#if 0
int sg_cnt = req->sg_cnt;
struct scatterlist *sg;
struct bio *bio;
sector_t sector;
blk_qc_t cookie;
int rw, i;
-
+#endif
if (!req->sg_cnt) {
nvmet_req_complete(req, 0);
return;
}
-
+#if 0
if (req->cmd->rw.opcode == nvme_cmd_write) {
if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
rw = WRITE_FUA;
@@ -95,10 +100,12 @@ static void nvmet_execute_rw(struct nvmet_req *req)
cookie = submit_bio(rw, bio);
blk_poll(bdev_get_queue(req->ns->bdev), cookie);
+#endif
}
static void nvmet_execute_flush(struct nvmet_req *req)
{
+#if 0
struct bio *bio;
nvmet_inline_bio_init(req);
@@ -109,8 +116,10 @@ static void nvmet_execute_flush(struct nvmet_req *req)
bio->bi_end_io = nvmet_bio_done;
submit_bio(WRITE_FLUSH, bio);
+#endif
}
+#if 0
static u16 nvmet_discard_range(struct nvmet_ns *ns,
struct nvme_dsm_range *range, int type, struct bio **bio)
{
@@ -119,11 +128,14 @@ static u16 nvmet_discard_range(struct nvmet_ns *ns,
le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
GFP_KERNEL, type, bio))
return NVME_SC_INTERNAL | NVME_SC_DNR;
+
return 0;
}
+#endif
static void nvmet_execute_discard(struct nvmet_req *req)
{
+#if 0
struct nvme_dsm_range range;
struct bio *bio = NULL;
int type = REQ_WRITE | REQ_DISCARD, i;
@@ -152,6 +164,7 @@ static void nvmet_execute_discard(struct nvmet_req *req)
} else {
nvmet_req_complete(req, status);
}
+#endif
}
static void nvmet_execute_dsm(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index db12e06..16c3fa1 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -41,15 +41,13 @@
struct nvmet_ns {
struct list_head dev_link;
struct percpu_ref ref;
- struct block_device *bdev;
+ struct se_device __rcu *dev;
u32 nsid;
u32 blksize_shift;
loff_t size;
u8 nguid[16];
struct nvmet_subsys *subsys;
- const char *device_path;
-
struct config_group device_group;
struct config_group group;
@@ -299,7 +297,7 @@ void nvmet_subsys_put(struct nvmet_subsys *subsys);
struct nvmet_ns *nvmet_find_namespace(struct nvmet_ctrl *ctrl, __le32 nsid);
void nvmet_put_namespace(struct nvmet_ns *ns);
-int nvmet_ns_enable(struct nvmet_ns *ns);
+int nvmet_ns_enable(struct nvmet_ns *ns, struct se_device *dev);
void nvmet_ns_disable(struct nvmet_ns *ns);
struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid);
void nvmet_ns_free(struct nvmet_ns *ns);
--
1.9.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [RFC 4/8] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
` (2 preceding siblings ...)
2016-06-07 6:36 ` [RFC 3/8] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 5/8] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache " Nicholas A. Bellinger
` (3 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch converts nvmet_execute_rw() to utilize sbc_ops->execute_rw()
for target_iostate + target_iomem-based I/O submission into existing
backend drivers via configfs in /sys/kernel/config/target/core/.
This includes support for passing T10-PI scatterlists via target_iomem
into existing sbc_ops->execute_rw() logic, and is functioning with
IBLOCK, FILEIO, and RAMDISK.
Note the preceding target/iblock patch absorbs inline bio + bvecs
and blk_poll() optimizations from Ming + Sagi in nvmet/io-cmd into
target_core_iblock.c code.
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/io-cmd.c | 116 ++++++++++++++++++++++---------------------
drivers/nvme/target/nvmet.h | 7 +++
2 files changed, 67 insertions(+), 56 deletions(-)
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 38c2e97..133a14a 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -14,20 +14,16 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/blkdev.h>
#include <linux/module.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
#include "nvmet.h"
-#if 0
-static void nvmet_bio_done(struct bio *bio)
+static void nvmet_complete_ios(struct target_iostate *ios, u16 status)
{
- struct nvmet_req *req = bio->bi_private;
-
- nvmet_req_complete(req,
- bio->bi_error ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+ struct nvmet_req *req = container_of(ios, struct nvmet_req, t_iostate);
- if (bio != &req->inline_bio)
- bio_put(bio);
+ nvmet_req_complete(req, status ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
}
-#endif
static inline u32 nvmet_rw_len(struct nvmet_req *req)
{
@@ -35,72 +31,80 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
req->ns->blksize_shift;
}
-#if 0
-static void nvmet_inline_bio_init(struct nvmet_req *req)
-{
- struct bio *bio = &req->inline_bio;
-
- bio_init(bio);
- bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;
- bio->bi_io_vec = req->inline_bvec;
-}
-#endif
-
static void nvmet_execute_rw(struct nvmet_req *req)
{
-#if 0
- int sg_cnt = req->sg_cnt;
- struct scatterlist *sg;
- struct bio *bio;
+ struct target_iostate *ios = &req->t_iostate;
+ struct target_iomem *iomem = &req->t_iomem;
+ struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+ struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
sector_t sector;
- blk_qc_t cookie;
- int rw, i;
-#endif
+ enum dma_data_direction data_direction;
+ sense_reason_t rc;
+ bool fua_write = false, prot_enabled = false;
+
+ if (!sbc_ops || !sbc_ops->execute_rw) {
+ nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+ return;
+ }
+
if (!req->sg_cnt) {
nvmet_req_complete(req, 0);
return;
}
-#if 0
+
if (req->cmd->rw.opcode == nvme_cmd_write) {
if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
- rw = WRITE_FUA;
- else
- rw = WRITE;
+ fua_write = true;
+
+ data_direction = DMA_TO_DEVICE;
} else {
- rw = READ;
+ data_direction = DMA_FROM_DEVICE;
}
sector = le64_to_cpu(req->cmd->rw.slba);
sector <<= (req->ns->blksize_shift - 9);
- nvmet_inline_bio_init(req);
- bio = &req->inline_bio;
- bio->bi_bdev = req->ns->bdev;
- bio->bi_iter.bi_sector = sector;
- bio->bi_private = req;
- bio->bi_end_io = nvmet_bio_done;
-
- for_each_sg(req->sg, sg, req->sg_cnt, i) {
- while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
- != sg->length) {
- struct bio *prev = bio;
-
- bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
- bio->bi_bdev = req->ns->bdev;
- bio->bi_iter.bi_sector = sector;
-
- bio_chain(bio, prev);
- cookie = submit_bio(rw, prev);
- }
+ ios->t_task_lba = sector;
+ ios->data_length = nvmet_rw_len(req);
+ ios->data_direction = data_direction;
+ iomem->t_data_sg = req->sg;
+ iomem->t_data_nents = req->sg_cnt;
+ iomem->t_prot_sg = req->prot_sg;
+ iomem->t_prot_nents = req->prot_sg_cnt;
+
+ // XXX: Make common between sbc_check_prot and nvme-target
+ switch (dev->dev_attrib.pi_prot_type) {
+ case TARGET_DIF_TYPE3_PROT:
+ ios->reftag_seed = 0xffffffff;
+ prot_enabled = true;
+ break;
+ case TARGET_DIF_TYPE1_PROT:
+ ios->reftag_seed = ios->t_task_lba;
+ prot_enabled = true;
+ break;
+ default:
+ break;
+ }
- sector += sg->length >> 9;
- sg_cnt--;
+ if (prot_enabled) {
+ ios->prot_type = dev->dev_attrib.pi_prot_type;
+ ios->prot_length = dev->prot_length *
+ (le16_to_cpu(req->cmd->rw.length) + 1);
+#if 0
+ printk("req->cmd->rw.length: %u\n", le16_to_cpu(req->cmd->rw.length));
+ printk("nvmet_rw_len: %u\n", nvmet_rw_len(req));
+ printk("req->se_cmd.prot_type: %d\n", req->se_cmd.prot_type);
+ printk("req->se_cmd.prot_length: %u\n", req->se_cmd.prot_length);
+#endif
}
- cookie = submit_bio(rw, bio);
+ ios->se_dev = dev;
+ ios->iomem = iomem;
+ ios->t_comp_func = &nvmet_complete_ios;
- blk_poll(bdev_get_queue(req->ns->bdev), cookie);
-#endif
+ rc = sbc_ops->execute_rw(ios, iomem->t_data_sg, iomem->t_data_nents,
+ ios->data_direction, fua_write,
+ &nvmet_complete_ios);
}
static void nvmet_execute_flush(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 16c3fa1..73f1df7 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -25,6 +25,7 @@
#include <linux/configfs.h>
#include <linux/rcupdate.h>
#include <linux/blkdev.h>
+#include <target/target_core_base.h>
#define NVMET_ASYNC_EVENTS 4
#define NVMET_ERROR_LOG_SLOTS 128
@@ -233,6 +234,12 @@ struct nvmet_req {
int sg_cnt;
size_t data_len;
+ struct scatterlist *prot_sg;
+ int prot_sg_cnt;
+
+ struct target_iostate t_iostate;
+ struct target_iomem t_iomem;
+
struct nvmet_port *port;
void (*execute)(struct nvmet_req *req);
--
1.9.1
* [RFC 5/8] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
` (3 preceding siblings ...)
2016-06-07 6:36 ` [RFC 4/8] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 6/8] nvmet/io-cmd: Hookup sbc_ops->execute_unmap " Nicholas A. Bellinger
` (2 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch converts nvmet_execute_flush() to utilize
sbc_ops->execute_sync_cache() for target_iostate
submission into existing backend drivers via
configfs in /sys/kernel/config/target/core/.
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/io-cmd.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 133a14a..23905a8 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -109,18 +109,21 @@ static void nvmet_execute_rw(struct nvmet_req *req)
static void nvmet_execute_flush(struct nvmet_req *req)
{
-#if 0
- struct bio *bio;
+ struct target_iostate *ios = &req->t_iostate;
+ struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+ struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
+ sense_reason_t rc;
- nvmet_inline_bio_init(req);
- bio = &req->inline_bio;
+ if (!sbc_ops || !sbc_ops->execute_sync_cache) {
+ nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+ return;
+ }
- bio->bi_bdev = req->ns->bdev;
- bio->bi_private = req;
- bio->bi_end_io = nvmet_bio_done;
+ ios->se_dev = dev;
+ ios->iomem = NULL;
+ ios->t_comp_func = &nvmet_complete_ios;
- submit_bio(WRITE_FLUSH, bio);
-#endif
+ rc = sbc_ops->execute_sync_cache(ios, false);
}
#if 0
--
1.9.1
* [RFC 6/8] nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
` (4 preceding siblings ...)
2016-06-07 6:36 ` [RFC 5/8] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache " Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 8/8] nvme/loop: Add support for bio integrity handling Nicholas A. Bellinger
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch converts nvmet_execute_discard() to utilize
sbc_ops->execute_unmap() for target_iostate submission
into existing backend drivers via configfs in
/sys/kernel/config/target/core/.
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/io-cmd.c | 47 ++++++++++++++++++++++++++++++++------------
1 file changed, 34 insertions(+), 13 deletions(-)
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 23905a8..605f560 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -126,52 +126,73 @@ static void nvmet_execute_flush(struct nvmet_req *req)
rc = sbc_ops->execute_sync_cache(ios, false);
}
-#if 0
-static u16 nvmet_discard_range(struct nvmet_ns *ns,
- struct nvme_dsm_range *range, int type, struct bio **bio)
+static u16 nvmet_discard_range(struct nvmet_req *req, struct sbc_ops *sbc_ops,
+ struct nvme_dsm_range *range, struct bio **bio)
{
- if (__blkdev_issue_discard(ns->bdev,
+ struct nvmet_ns *ns = req->ns;
+ sense_reason_t rc;
+
+ rc = sbc_ops->execute_unmap(&req->t_iostate,
le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
- GFP_KERNEL, type, bio))
+ bio);
+ if (rc)
return NVME_SC_INTERNAL | NVME_SC_DNR;
return 0;
}
-#endif
+
+static void nvmet_discard_bio_done(struct bio *bio)
+{
+ struct nvmet_req *req = bio->bi_private;
+ int err = bio->bi_error;
+
+ bio_put(bio);
+ nvmet_req_complete(req, err ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+}
static void nvmet_execute_discard(struct nvmet_req *req)
{
-#if 0
- struct nvme_dsm_range range;
+ struct target_iostate *ios = &req->t_iostate;
+ struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+ struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
struct bio *bio = NULL;
- int type = REQ_WRITE | REQ_DISCARD, i;
+ struct nvme_dsm_range range;
+ int i;
u16 status;
+ if (!sbc_ops || !sbc_ops->execute_unmap) {
+ nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+ return;
+ }
+
+ ios->se_dev = dev;
+ ios->iomem = NULL;
+ ios->t_comp_func = NULL;
+
for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
status = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
sizeof(range));
if (status)
break;
- status = nvmet_discard_range(req->ns, &range, type, &bio);
+ status = nvmet_discard_range(req, sbc_ops, &range, &bio);
if (status)
break;
}
if (bio) {
bio->bi_private = req;
- bio->bi_end_io = nvmet_bio_done;
+ bio->bi_end_io = nvmet_discard_bio_done;
if (status) {
bio->bi_error = -EIO;
bio_endio(bio);
} else {
- submit_bio(type, bio);
+ submit_bio(REQ_WRITE | REQ_DISCARD, bio);
}
} else {
nvmet_req_complete(req, status);
}
-#endif
}
static void nvmet_execute_dsm(struct nvmet_req *req)
--
1.9.1
* [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
` (5 preceding siblings ...)
2016-06-07 6:36 ` [RFC 6/8] nvmet/io-cmd: Hookup sbc_ops->execute_unmap " Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
2016-06-09 13:52 ` Christoph Hellwig
2016-06-07 6:36 ` [RFC 8/8] nvme/loop: Add support for bio integrity handling Nicholas A. Bellinger
7 siblings, 1 reply; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch updates nvmet_execute_identify_ns() to report
target-core backend T10-PI related feature bits to the
NVMe host controller.
Note this assumes support for NVME_NS_DPC_PI_TYPE1 and
NVME_NS_DPC_PI_TYPE3 as reported by backend drivers via
/sys/kernel/config/target/core/*/*/attrib/pi_prot_type.
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/admin-cmd.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 240e323..3a808dc 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -200,6 +200,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
{
struct nvmet_ns *ns;
struct nvme_id_ns *id;
+ struct se_device *dev;
u16 status = 0;
ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
@@ -228,6 +229,22 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
id->nlbaf = 0;
id->flbas = 0;
+ /* Populate bits for T10-PI from se_device backend */
+ rcu_read_lock();
+ dev = rcu_dereference(ns->dev);
+ if (dev && dev->dev_attrib.pi_prot_type) {
+ int pi_prot_type = dev->dev_attrib.pi_prot_type;
+
+ id->lbaf[0].ms = cpu_to_le16(sizeof(struct t10_pi_tuple));
+ printk("nvmet_set_id_ns: ms: %u\n", id->lbaf[0].ms);
+
+ if (pi_prot_type == 1)
+ id->dps = NVME_NS_DPC_PI_TYPE1;
+ else if (pi_prot_type == 3)
+ id->dps = NVME_NS_DPC_PI_TYPE3;
+ }
+ rcu_read_unlock();
+
/*
* Our namespace might always be shared. Not just with other
* controllers, but also with any other user of the block device.
--
1.9.1
* [RFC 8/8] nvme/loop: Add support for bio integrity handling
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
` (6 preceding siblings ...)
2016-06-07 6:36 ` [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
@ 2016-06-07 6:36 ` Nicholas A. Bellinger
7 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-07 6:36 UTC (permalink / raw)
To: target-devel
Cc: linux-nvme, linux-scsi, Jens Axboe, Christoph Hellwig,
Martin Petersen, Sagi Grimberg, Hannes Reinecke, Mike Christie,
Dave B Minturn, Nicholas Bellinger
From: Nicholas Bellinger <nab@linux-iscsi.org>
This patch adds support for nvme/loop block integrity,
based upon the reported ID_NS.ms + ID_NS.dps feature
bits in nvmet_execute_identify_ns().
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
drivers/nvme/target/loop.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 08b4fbb..b4b4da9 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -42,6 +42,7 @@ struct nvme_loop_iod {
struct nvme_loop_queue *queue;
struct work_struct work;
struct sg_table sg_table;
+ struct scatterlist meta_sg;
struct scatterlist first_sgl[];
};
@@ -193,6 +194,24 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
BUG_ON(iod->req.sg_cnt > req->nr_phys_segments);
}
+ if (blk_integrity_rq(req)) {
+ int count;
+
+ if (blk_rq_count_integrity_sg(hctx->queue, req->bio) != 1)
+ BUG_ON(1);
+
+ sg_init_table(&iod->meta_sg, 1);
+ count = blk_rq_map_integrity_sg(hctx->queue, req->bio,
+ &iod->meta_sg);
+
+ iod->req.prot_sg = &iod->meta_sg;
+ iod->req.prot_sg_cnt = 1;
+#if 0
+ printk("nvme/loop: Set prot_sg %p and prot_sg_cnt: %d\n",
+ iod->req.prot_sg, iod->req.prot_sg_cnt);
+#endif
+ }
+
iod->cmd.common.command_id = req->tag;
blk_mq_start_request(req);
--
1.9.1
* Re: [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
2016-06-07 6:36 ` [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
@ 2016-06-09 13:52 ` Christoph Hellwig
2016-06-10 6:55 ` Nicholas A. Bellinger
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2016-06-09 13:52 UTC (permalink / raw)
To: Nicholas A. Bellinger
Cc: target-devel, linux-nvme, linux-scsi, Jens Axboe,
Christoph Hellwig, Martin Petersen, Sagi Grimberg,
Hannes Reinecke, Mike Christie, Dave B Minturn
FYI: NVMf requires metadata to be interleaved in the data, and you
need to indicate that in the Identify data. Note that this is only
a requirement for the on-the-wire format and for the way the namespaces are
exposed at the protocol level, as RDMA HCAs and FC HBAs should still
be able to handle our separate SGL.
* Re: [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
2016-06-09 13:52 ` Christoph Hellwig
@ 2016-06-10 6:55 ` Nicholas A. Bellinger
0 siblings, 0 replies; 11+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-10 6:55 UTC (permalink / raw)
To: Christoph Hellwig
Cc: target-devel, linux-nvme, linux-scsi, Jens Axboe, Martin Petersen,
Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn
On Thu, 2016-06-09 at 15:52 +0200, Christoph Hellwig wrote:
> FYI: NVMf requires metadata to be interleaved in the data, and you
> need to indicate that in the Identify data. Note that this is only
> a requirement for the on-the-wire format and for the way the namespaces are
> exposed at the protocol level, as RDMA HCAs and FC HBAs should still
> be able to handle our separate SGL.
Btw, nvmet needs something similar for controller creation, like how
target_port_op works with target_alloc_session(), so hardware capabilities
can signal to host controllers that PI should be enabled for a namespace
even when the actual namespace backend doesn't support it.
E.g.: the TARGET_PROT_DOUT_STRIP and TARGET_PROT_DIN_INSERT ops
end of thread, other threads:[~2016-06-10 6:55 UTC | newest]
Thread overview: 11+ messages
2016-06-07 6:36 [RFC 0/8] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 1/8] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 2/8] nvmet: Add support for configfs-ng multi-tenant logic Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 3/8] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 4/8] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 5/8] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache " Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 6/8] nvmet/io-cmd: Hookup sbc_ops->execute_unmap " Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 7/8] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
2016-06-09 13:52 ` Christoph Hellwig
2016-06-10 6:55 ` Nicholas A. Bellinger
2016-06-07 6:36 ` [RFC 8/8] nvme/loop: Add support for bio integrity handling Nicholas A. Bellinger