* [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs
@ 2016-06-14  4:35 Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 01/11] nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary Nicholas A. Bellinger
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Hi folks,

Here's the second pass of an nvmet multi-tenant configfs layout,
following what we've learned in target_core_fabric_configfs.c
with respect to independent operation of storage endpoints.

Namely, it allows existing /sys/kernel/config/target/core/ backends
to be configfs symlinked into ../nvmet/subsystems/$SUBSYS_NQN/
as nvme namespaces.

Here is how the running RFC-v2 code currently looks:

/sys/kernel/config/nvmet/subsystems/
└── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
    ├── hosts
    ├── namespaces
    │   └── 1
    │       └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
    └── ports
        └── loop
            ├── addr_adrfam
            ├── addr_portid
            ├── addr_traddr
            ├── addr_treq
            ├── addr_trsvcid
            ├── addr_trtype
            └── enable

The series exposes T10-PI from /sys/kernel/config/target/core/ as
ID_NS.ms + ID_NS.dps feature bits, and enables block integrity
support with the nvmet/loop driver; a rough sketch of the intended
mapping follows.
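
As a rough illustration only (hand-written sketch, not code from patch 10;
the pi_prot_type attribute is target-core's se_dev_attrib field, and the
raw DPS type codes 1..3 / 8-byte PI tuple size are assumptions based on
the NVMe Identify Namespace layout):

	static void nvmet_id_ns_set_pi(struct nvme_id_ns *id, struct se_device *dev)
	{
		/* 8 bytes of T10-PI metadata per LBA when protection is enabled */
		id->ms = cpu_to_le16(dev->dev_attrib.pi_prot_type ? 8 : 0);

		/* DPS bits 2:0 carry the protection type (0 = none, 1..3 = DIF 1..3) */
		if (dev->dev_attrib.pi_prot_type >= 1 &&
		    dev->dev_attrib.pi_prot_type <= 3)
			id->dps = dev->dev_attrib.pi_prot_type;
		else
			id->dps = 0;
	}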

Note this series depends upon the following prerequisites of
target-core:

  http://marc.info/?l=linux-scsi&m=146527281416606&w=2

and of course, last week's earlier release of nvmet + friends:

  http://lists.infradead.org/pipermail/linux-nvme/2016-June/004754.html

Note the full set of patches is available from:

  https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/log/?h=nvmet-configfs-ng

v2 changes:

  - Introduce struct nvmet_port_binding in configfs-ng.c, in order
    to support 1:N mappings.
  - Convert nvmet_find_get_subsys() + discovery.c logic to use
    nvmet_port->port_binding_list.
  - Convert nvmet/loop to use nvmet_port_binding. (1:1 mapping)
  - Convert nvmet/rdma to use nvmet_port_binding + nvmet_rdma_ports.
    (1:N mapping)
  - Export nvmf_host_add + generate hostnqn if necessary. (HCH)
  - Make nvmet/loop multi-controller allocate its own nvmf_host
    per controller. (HCH)
  - Change nvmet_fabric_ops get/put to use nvmf_disc_rsp_page_entry.
  - Convert nvmet_genctr to atomic_long_t.
  - Enable ../nvmet/subsystems/$NQN/hosts/$HOSTNQN group usage
    in configfs-ng.c.

Comments..?

--nab

Nicholas Bellinger (11):
  nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary
  nvmet: Add nvmet_fabric_ops get/put transport helpers
  nvmet: Add support for configfs-ng multi-tenant logic
  nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable
  nvmet/loop: Add support for controller-per-port model +
    nvmet_port_binding
  nvmet/rdma: Convert to struct nvmet_port_binding
  nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops
  nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops
  nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops
  nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
  nvmet/loop: Add support for bio integrity handling

 drivers/nvme/host/fabrics.c       |  18 +-
 drivers/nvme/host/fabrics.h       |   1 +
 drivers/nvme/target/Makefile      |   2 +-
 drivers/nvme/target/admin-cmd.c   |  17 +
 drivers/nvme/target/configfs-ng.c | 661 ++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/configfs.c    |  12 +-
 drivers/nvme/target/core.c        | 153 ++++++---
 drivers/nvme/target/discovery.c   |  31 +-
 drivers/nvme/target/io-cmd.c      | 169 ++++++----
 drivers/nvme/target/loop.c        | 223 +++++++++++--
 drivers/nvme/target/nvmet.h       |  67 +++-
 drivers/nvme/target/rdma.c        | 127 +++++++-
 12 files changed, 1317 insertions(+), 164 deletions(-)
 create mode 100644 drivers/nvme/target/configfs-ng.c

-- 
1.9.1



* [RFC-v2 01/11] nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 02/11] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch exports nvmf_host_add() so it can be used externally,
and, if no hostnqn is passed, generates one based on host->id,
following what nvmf_host_default() does.

Note this is required for nvme-loop multi-controller support,
in order to drive nvmet_port creation directly via a configfs
attribute write from within ../nvmet/subsystems/$NQN/ports/$PORT/
group context.
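
For example, a fabric driver could do something along these lines
(illustrative sketch only, not part of this patch; assumes the usual
struct nvmf_ctrl_options *opts with its host pointer):

	struct nvmf_host *host;

	/* NULL hostnqn => nvmf_host_add() generates one from host->id */
	host = nvmf_host_add(NULL);
	if (!host)
		return -ENOMEM;

	opts->host = host;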

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jay Freyensee <james.p.freyensee@intel.com>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/host/fabrics.c | 18 +++++++++++++-----
 drivers/nvme/host/fabrics.h |  1 +
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index ee4b7f1..2e0086a 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -41,28 +41,36 @@ static struct nvmf_host *__nvmf_host_find(const char *hostnqn)
 	return NULL;
 }
 
-static struct nvmf_host *nvmf_host_add(const char *hostnqn)
+struct nvmf_host *nvmf_host_add(const char *hostnqn)
 {
 	struct nvmf_host *host;
 
 	mutex_lock(&nvmf_hosts_mutex);
-	host = __nvmf_host_find(hostnqn);
-	if (host)
-		goto out_unlock;
+	if (hostnqn) {
+		host = __nvmf_host_find(hostnqn);
+		if (host)
+			goto out_unlock;
+	}
 
 	host = kmalloc(sizeof(*host), GFP_KERNEL);
 	if (!host)
 		goto out_unlock;
 
 	kref_init(&host->ref);
-	memcpy(host->nqn, hostnqn, NVMF_NQN_SIZE);
 	uuid_le_gen(&host->id);
 
+	if (hostnqn)
+		memcpy(host->nqn, hostnqn, NVMF_NQN_SIZE);
+	else
+		snprintf(host->nqn, NVMF_NQN_SIZE,
+			"nqn.2014-08.org.nvmexpress:NVMf:uuid:%pUl", &host->id);
+
 	list_add_tail(&host->list, &nvmf_hosts);
 out_unlock:
 	mutex_unlock(&nvmf_hosts_mutex);
 	return host;
 }
+EXPORT_SYMBOL_GPL(nvmf_host_add);
 
 static struct nvmf_host *nvmf_host_default(void)
 {
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index b540674..956eab4 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -128,6 +128,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl);
 int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid);
 void nvmf_register_transport(struct nvmf_transport_ops *ops);
 void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
+struct nvmf_host *nvmf_host_add(const char *hostnqn);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
 const char *nvmf_get_subsysnqn(struct nvme_ctrl *ctrl);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
-- 
1.9.1


* [RFC-v2 02/11] nvmet: Add nvmet_fabric_ops get/put transport helpers
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 01/11] nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 03/11] nvmet: Add support for configfs-ng multi-tenant logic Nicholas A. Bellinger
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch introduces two helpers for obtaining + releasing
nvmet_fabric_ops for nvmet_port usage, and the associated
struct module ops->owner reference.

This is required in order to support nvmet/configfs-ng
and multiple nvmet_port configfs groups living under
/sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/ports/
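
Typical usage is a balanced get/put pair around port setup, roughly
(condensed from the configfs-ng enable path later in the series):

	ops = nvmet_get_transport(&pb->disc_addr);
	if (IS_ERR(ops))
		return PTR_ERR(ops);

	pb->nf_ops = ops;

	ret = ops->add_port(pb);
	if (ret) {
		nvmet_put_transport(&pb->disc_addr);
		return ret;
	}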

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/core.c  | 32 ++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h |  4 ++++
 2 files changed, 36 insertions(+)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index e0b3f01..689ad4c 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -191,6 +191,38 @@ void nvmet_disable_port(struct nvmet_port *port)
 	module_put(ops->owner);
 }
 
+struct nvmet_fabrics_ops *
+nvmet_get_transport(struct nvmf_disc_rsp_page_entry *disc_addr)
+{
+	struct nvmet_fabrics_ops *ops;
+
+	down_write(&nvmet_config_sem);
+	ops = nvmet_transports[disc_addr->trtype];
+	if (!ops) {
+		pr_err("transport type %d not supported\n",
+			disc_addr->trtype);
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (!try_module_get(ops->owner)) {
+		up_write(&nvmet_config_sem);
+		return ERR_PTR(-EINVAL);
+	}
+	up_write(&nvmet_config_sem);
+
+	return ops;
+}
+
+void nvmet_put_transport(struct nvmf_disc_rsp_page_entry *disc_addr)
+{
+	struct nvmet_fabrics_ops *ops;
+
+	down_write(&nvmet_config_sem);
+	ops = nvmet_transports[disc_addr->trtype];
+	module_put(ops->owner);
+	up_write(&nvmet_config_sem);
+}
+
 static void nvmet_keep_alive_timer(struct work_struct *work)
 {
 	struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 57dd6d8..17fd217 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -299,6 +299,10 @@ void nvmet_unregister_transport(struct nvmet_fabrics_ops *ops);
 int nvmet_enable_port(struct nvmet_port *port);
 void nvmet_disable_port(struct nvmet_port *port);
 
+struct nvmet_fabrics_ops *nvmet_get_transport(
+		struct nvmf_disc_rsp_page_entry *disc_addr);
+void nvmet_put_transport(struct nvmf_disc_rsp_page_entry *disc_addr);
+
 void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port);
 void nvmet_referral_disable(struct nvmet_port *port);
 
-- 
1.9.1


* [RFC-v2 03/11] nvmet: Add support for configfs-ng multi-tenant logic
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 01/11] nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 02/11] nvmet: Add nvmet_fabric_ops get/put transport helpers Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 04/11] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable Nicholas A. Bellinger
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch introduces support for configfs-ng, which allows
multi-tenant /sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/
operation, with existing /sys/kernel/config/target/core/
backends from target-core configfs symlinked in as
per nvme-target subsystem NQN namespaces.

Here's how the layout looks:

/sys/kernel/config/nvmet/
└── subsystems
    └── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
        ├── namespaces
        │   └── 1
        │       └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
        └── ports
            └── loop
                ├── addr_adrfam
                ├── addr_portid
                ├── addr_traddr
                ├── addr_treq
                ├── addr_trsvcid
                ├── addr_trtype
                └── enable

It converts nvmet_find_get_subsys() to walk the per-port
port_binding_list (see the condensed sketch below), and does
the same for nvmet_host_discovery_allowed().
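
That is, resolving a subsystem for an incoming connect now walks the
port's binding list, roughly (condensed from the nvmet_find_get_subsys()
conversion in the diff below):

	mutex_lock(&port->port_binding_mutex);
	list_for_each_entry(pb, &port->port_binding_list, node) {
		subsys = pb->nf_subsys;
		if (!subsys || strcmp(subsys->subsysnqn, subsysnqn))
			continue;

		if (kref_get_unless_zero(&subsys->ref)) {
			mutex_unlock(&port->port_binding_mutex);
			return subsys;
		}
	}
	mutex_unlock(&port->port_binding_mutex);
	return NULL;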

Also convert nvmet_genctr to atomic_long_t, so it can be used
outside of nvmet_config_sem.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/Makefile      |   2 +-
 drivers/nvme/target/configfs-ng.c | 662 ++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/configfs.c    |  12 +-
 drivers/nvme/target/core.c        |  91 ++++--
 drivers/nvme/target/discovery.c   |  31 +-
 drivers/nvme/target/nvmet.h       |  50 ++-
 6 files changed, 812 insertions(+), 36 deletions(-)
 create mode 100644 drivers/nvme/target/configfs-ng.c

diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index b7a0623..2799e07 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_NVME_TARGET)		+= nvmet.o
 obj-$(CONFIG_NVME_TARGET_LOOP)		+= nvme-loop.o
 obj-$(CONFIG_NVME_TARGET_RDMA)		+= nvmet-rdma.o
 
-nvmet-y		+= core.o configfs.o admin-cmd.o io-cmd.o fabrics-cmd.o \
+nvmet-y		+= core.o configfs-ng.o admin-cmd.o io-cmd.o fabrics-cmd.o \
 			discovery.o
 nvme-loop-y	+= loop.o
 nvmet-rdma-y	+= rdma.o
diff --git a/drivers/nvme/target/configfs-ng.c b/drivers/nvme/target/configfs-ng.c
new file mode 100644
index 0000000..28dc24b
--- /dev/null
+++ b/drivers/nvme/target/configfs-ng.c
@@ -0,0 +1,662 @@
+/*
+ * Based on target_core_fabric_configfs.c code
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/stat.h>
+#include <linux/ctype.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
+
+#include "nvmet.h"
+
+/*
+ * NVMf host CIT
+ */
+static void nvmet_host_release(struct config_item *item)
+{
+	struct nvmet_host *host = to_host(item);
+	struct nvmet_subsys *subsys = host->subsys;
+
+	mutex_lock(&subsys->hosts_mutex);
+	list_del_init(&host->node);
+	mutex_unlock(&subsys->hosts_mutex);
+
+	kfree(host);
+}
+
+static struct configfs_item_operations nvmet_host_item_opts = {
+	.release		= nvmet_host_release,
+};
+
+static struct config_item_type nvmet_host_type = {
+	.ct_item_ops		= &nvmet_host_item_opts,
+	.ct_attrs		= NULL,
+	.ct_owner		= THIS_MODULE,
+
+};
+
+static struct config_group *nvmet_make_hosts(struct config_group *group,
+		const char *name)
+{
+	struct nvmet_subsys *subsys = ports_to_subsys(&group->cg_item);
+	struct nvmet_host *host;
+
+	host = kzalloc(sizeof(*host), GFP_KERNEL);
+	if (!host)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&host->node);
+	host->subsys = subsys;
+
+	mutex_lock(&subsys->hosts_mutex);
+	list_add_tail(&host->node, &subsys->hosts);
+	mutex_unlock(&subsys->hosts_mutex);
+
+	config_group_init_type_name(&host->group, name, &nvmet_host_type);
+
+	return &host->group;
+}
+
+static void nvmet_drop_hosts(struct config_group *group, struct config_item *item)
+{
+	config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_hosts_group_ops = {
+	.make_group		= nvmet_make_hosts,
+	.drop_item		= nvmet_drop_hosts,
+};
+
+static struct config_item_type nvmet_hosts_type = {
+	.ct_group_ops		= &nvmet_hosts_group_ops,
+	.ct_item_ops		= NULL,
+	.ct_attrs		= NULL,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * nvmet_port Generic ConfigFS definitions.
+ */
+static ssize_t nvmet_port_addr_adrfam_show(struct config_item *item,
+		char *page)
+{
+	switch (to_nvmet_port_binding(item)->disc_addr.adrfam) {
+	case NVMF_ADDR_FAMILY_IP4:
+		return sprintf(page, "ipv4\n");
+	case NVMF_ADDR_FAMILY_IP6:
+		return sprintf(page, "ipv6\n");
+	case NVMF_ADDR_FAMILY_IB:
+		return sprintf(page, "ib\n");
+	default:
+		return sprintf(page, "\n");
+	}
+}
+
+static ssize_t nvmet_port_addr_adrfam_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+
+	if (sysfs_streq(page, "ipv4")) {
+		pb->disc_addr.adrfam = NVMF_ADDR_FAMILY_IP4;
+	} else if (sysfs_streq(page, "ipv6")) {
+		pb->disc_addr.adrfam = NVMF_ADDR_FAMILY_IP6;
+	} else if (sysfs_streq(page, "ib")) {
+		pb->disc_addr.adrfam = NVMF_ADDR_FAMILY_IB;
+	} else {
+		pr_err("Invalid value '%s' for adrfam\n", page);
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_adrfam);
+
+static ssize_t nvmet_port_addr_portid_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	return snprintf(page, PAGE_SIZE, "%d\n",
+			le16_to_cpu(pb->disc_addr.portid));
+}
+
+static ssize_t nvmet_port_addr_portid_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+	u16 portid = 0;
+
+	if (kstrtou16(page, 0, &portid)) {
+		pr_err("Invalid value '%s' for portid\n", page);
+		return -EINVAL;
+	}
+
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+	pb->disc_addr.portid = cpu_to_le16(portid);
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_portid);
+
+static ssize_t nvmet_port_addr_traddr_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	return snprintf(page, PAGE_SIZE, "%s\n",
+			pb->disc_addr.traddr);
+}
+
+static ssize_t nvmet_port_addr_traddr_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	if (count > NVMF_TRADDR_SIZE) {
+		pr_err("Invalid value '%s' for traddr\n", page);
+		return -EINVAL;
+	}
+
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+	return snprintf(pb->disc_addr.traddr,
+			sizeof(pb->disc_addr.traddr), "%s", page);
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_traddr);
+
+static ssize_t nvmet_port_addr_treq_show(struct config_item *item,
+		char *page)
+{
+	switch (to_nvmet_port_binding(item)->disc_addr.treq) {
+	case NVMF_TREQ_NOT_SPECIFIED:
+		return sprintf(page, "not specified\n");
+	case NVMF_TREQ_REQUIRED:
+		return sprintf(page, "required\n");
+	case NVMF_TREQ_NOT_REQUIRED:
+		return sprintf(page, "not required\n");
+	default:
+		return sprintf(page, "\n");
+	}
+}
+
+static ssize_t nvmet_port_addr_treq_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+
+	if (sysfs_streq(page, "not specified")) {
+		pb->disc_addr.treq = NVMF_TREQ_NOT_SPECIFIED;
+	} else if (sysfs_streq(page, "required")) {
+		pb->disc_addr.treq = NVMF_TREQ_REQUIRED;
+	} else if (sysfs_streq(page, "not required")) {
+		pb->disc_addr.treq = NVMF_TREQ_NOT_REQUIRED;
+	} else {
+		pr_err("Invalid value '%s' for treq\n", page);
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_treq);
+
+static ssize_t nvmet_port_addr_trsvcid_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	return snprintf(page, PAGE_SIZE, "%s\n",
+			pb->disc_addr.trsvcid);
+}
+
+static ssize_t nvmet_port_addr_trsvcid_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	if (count > NVMF_TRSVCID_SIZE) {
+		pr_err("Invalid value '%s' for trsvcid\n", page);
+		return -EINVAL;
+	}
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+	return snprintf(pb->disc_addr.trsvcid,
+			sizeof(pb->disc_addr.trsvcid), "%s", page);
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_trsvcid);
+
+static ssize_t nvmet_port_addr_trtype_show(struct config_item *item,
+		char *page)
+{
+	switch (to_nvmet_port_binding(item)->disc_addr.trtype) {
+	case NVMF_TRTYPE_RDMA:
+		return sprintf(page, "rdma\n");
+	case NVMF_TRTYPE_LOOP:
+		return sprintf(page, "loop\n");
+	default:
+		return sprintf(page, "\n");
+	}
+}
+
+static void nvmet_port_init_tsas_rdma(struct nvmet_port_binding *pb)
+{
+	pb->disc_addr.trtype = NVMF_TRTYPE_RDMA;
+	memset(&pb->disc_addr.tsas.rdma, 0, NVMF_TSAS_SIZE);
+	pb->disc_addr.tsas.rdma.qptype = NVMF_RDMA_QPTYPE_CONNECTED;
+	pb->disc_addr.tsas.rdma.prtype = NVMF_RDMA_PRTYPE_NOT_SPECIFIED;
+	pb->disc_addr.tsas.rdma.cms = NVMF_RDMA_CMS_RDMA_CM;
+}
+
+static void nvmet_port_init_tsas_loop(struct nvmet_port_binding *pb)
+{
+	pb->disc_addr.trtype = NVMF_TRTYPE_LOOP;
+	memset(&pb->disc_addr.tsas, 0, NVMF_TSAS_SIZE);
+}
+
+static ssize_t nvmet_port_addr_trtype_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	if (pb->enabled) {
+		pr_err("Cannot modify address while enabled\n");
+		pr_err("Disable the address before modifying\n");
+		return -EACCES;
+	}
+
+	if (sysfs_streq(page, "rdma")) {
+		nvmet_port_init_tsas_rdma(pb);
+	} else if (sysfs_streq(page, "loop")) {
+		nvmet_port_init_tsas_loop(pb);
+	} else {
+		pr_err("Invalid value '%s' for trtype\n", page);
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, addr_trtype);
+
+static void nvmet_port_disable(struct nvmet_port_binding *pb)
+{
+	struct nvmet_fabrics_ops *ops = pb->nf_ops;
+	struct nvmet_port *port = pb->port;
+
+	if (!ops || !port)
+		return;
+
+	ops->remove_port(pb);
+	nvmet_put_transport(&pb->disc_addr);
+	pb->nf_ops = NULL;
+
+	atomic64_inc(&nvmet_genctr);
+}
+
+static ssize_t nvmet_port_enable_show(struct config_item *item, char *page)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	return sprintf(page, "%d\n", pb->enabled);
+}
+
+static ssize_t nvmet_port_enable_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port;
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+	struct nvmet_fabrics_ops *ops;
+	bool enable;
+	int rc;
+
+	if (strtobool(page, &enable))
+		return -EINVAL;
+
+	if (enable) {
+		if (pb->enabled) {
+			pr_warn("port already enabled: %d\n",
+				pb->disc_addr.trtype);
+			goto out;
+		}
+
+		ops = nvmet_get_transport(&pb->disc_addr);
+		if (IS_ERR(ops))
+			return PTR_ERR(ops);
+
+		pb->nf_ops = ops;
+
+		rc = ops->add_port(pb);
+		if (rc) {
+			nvmet_put_transport(&pb->disc_addr);
+			return rc;
+		}
+
+		atomic64_inc(&nvmet_genctr);
+	} else {
+		if (!pb->nf_ops)
+			return -EINVAL;
+
+		port = pb->port;
+		if (!port)
+			return -EINVAL;
+
+		nvmet_port_disable(pb);
+	}
+out:
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_port_, enable);
+
+static struct configfs_attribute *nvmet_port_attrs[] = {
+	&nvmet_port_attr_addr_adrfam,
+	&nvmet_port_attr_addr_portid,
+	&nvmet_port_attr_addr_traddr,
+	&nvmet_port_attr_addr_treq,
+	&nvmet_port_attr_addr_trsvcid,
+	&nvmet_port_attr_addr_trtype,
+	&nvmet_port_attr_enable,
+	NULL,
+};
+
+/*
+ * NVMf transport port CIT
+ */
+static void nvmet_port_release(struct config_item *item)
+{
+	struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
+
+	nvmet_port_disable(pb);
+	kfree(pb);
+}
+
+static struct configfs_item_operations nvmet_port_item_ops = {
+	.release	= nvmet_port_release,
+};
+
+static struct config_item_type nvmet_port_type = {
+	.ct_item_ops		= &nvmet_port_item_ops,
+	.ct_attrs		= nvmet_port_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_ports(struct config_group *group,
+		const char *name)
+{
+	struct nvmet_subsys *subsys = ports_to_subsys(&group->cg_item);
+	struct nvmet_port_binding *pb;
+
+	printk("Entering nvmet_make_port %s >>>>>>>>>>>>>>>>>>\n", name);
+
+	pb = kzalloc(sizeof(*pb), GFP_KERNEL);
+	if (!pb)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&pb->node);
+	pb->nf_subsys = subsys;
+
+	config_group_init_type_name(&pb->group, name, &nvmet_port_type);
+
+	return &pb->group;
+}
+
+static void nvmet_drop_ports(struct config_group *group, struct config_item *item)
+{
+	config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_ports_group_ops = {
+	.make_group		= nvmet_make_ports,
+	.drop_item		= nvmet_drop_ports,
+};
+
+static struct config_item_type nvmet_ports_type = {
+	.ct_group_ops		= &nvmet_ports_group_ops,
+	.ct_item_ops		= NULL,
+	.ct_attrs		= NULL,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * NVMf namespace <-> /sys/kernel/config/target/core/ backend configfs symlink
+ */
+static int nvmet_ns_link(struct config_item *ns_ci, struct config_item *dev_ci)
+{
+	struct nvmet_ns *ns = to_nvmet_ns(ns_ci);
+	struct se_device *dev =
+		container_of(to_config_group(dev_ci), struct se_device, dev_group);
+
+	if (dev->dev_link_magic != SE_DEV_LINK_MAGIC) {
+		pr_err("Bad dev->dev_link_magic, not a valid se_dev_ci pointer:"
+		       " %p to struct se_device: %p\n", dev_ci, dev);
+		return -EFAULT;
+	}
+
+	if (!(dev->dev_flags & DF_CONFIGURED)) {
+		pr_err("se_device not configured yet, cannot namespace link\n");
+		return -ENODEV;
+	}
+
+	if (!dev->transport->sbc_ops) {
+		pr_err("se_device does not have sbc_ops, cannot namespace link\n");
+		return -ENOSYS;
+	}
+
+	// XXX: Pass in struct se_device into nvmet_ns_enable
+	return nvmet_ns_enable(ns);
+}
+
+static int nvmet_ns_unlink(struct config_item *ns_ci, struct config_item *dev_ci)
+{
+	struct nvmet_ns *ns = to_nvmet_ns(ns_ci);
+
+	nvmet_ns_disable(ns);
+	return 0;
+}
+
+static void nvmet_ns_release(struct config_item *item)
+{
+	struct nvmet_ns *ns = to_nvmet_ns(item);
+
+	nvmet_ns_free(ns);
+}
+
+static struct configfs_item_operations nvmet_ns_item_ops = {
+	.release		= nvmet_ns_release,
+	.allow_link		= nvmet_ns_link,
+	.drop_link		= nvmet_ns_unlink,
+};
+
+static struct config_item_type nvmet_ns_type = {
+	.ct_item_ops		= &nvmet_ns_item_ops,
+	.ct_attrs		= NULL,
+	.ct_owner		= THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_namespace(struct config_group *group,
+		const char *name)
+{
+	struct nvmet_subsys *subsys = namespaces_to_subsys(&group->cg_item);
+	struct nvmet_ns *ns;
+	int ret;
+	u32 nsid;
+
+	ret = kstrtou32(name, 0, &nsid);
+	if (ret)
+		goto out;
+
+	ret = -EINVAL;
+	if (nsid == 0 || nsid == 0xffffffff)
+		goto out;
+
+	ret = -ENOMEM;
+	ns = nvmet_ns_alloc(subsys, nsid);
+	if (!ns)
+		goto out;
+	config_group_init_type_name(&ns->group, name, &nvmet_ns_type);
+
+	pr_info("adding nsid %d to subsystem %s\n", nsid, subsys->subsysnqn);
+
+	return &ns->group;
+out:
+	return ERR_PTR(ret);
+}
+
+static void nvmet_drop_namespace(struct config_group *group, struct config_item *item)
+{
+	/*
+	 * struct nvmet_ns is released via nvmet_ns_release()
+	 */
+	config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_namespaces_group_ops = {
+	.make_group		= nvmet_make_namespace,
+	.drop_item		= nvmet_drop_namespace,
+};
+
+static struct config_item_type nvmet_namespaces_type = {
+	.ct_group_ops		= &nvmet_namespaces_group_ops,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Subsystem structures & folder operation functions below
+ */
+static void nvmet_subsys_release(struct config_item *item)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+
+	nvmet_subsys_put(subsys);
+}
+
+static struct configfs_item_operations nvmet_subsys_item_ops = {
+	.release		= nvmet_subsys_release,
+};
+
+static struct config_item_type nvmet_subsys_type = {
+	.ct_item_ops		= &nvmet_subsys_item_ops,
+//	.ct_attrs		= nvmet_subsys_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+static struct config_group *nvmet_make_subsys(struct config_group *group,
+		const char *name)
+{
+	struct nvmet_subsys *subsys;
+
+	if (sysfs_streq(name, NVME_DISC_SUBSYS_NAME)) {
+		pr_err("can't create discovery subsystem through configfs\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	subsys = nvmet_subsys_alloc(name, NVME_NQN_NVME);
+	if (!subsys)
+		return ERR_PTR(-ENOMEM);
+
+	config_group_init_type_name(&subsys->group, name, &nvmet_subsys_type);
+
+	config_group_init_type_name(&subsys->namespaces_group,
+			"namespaces", &nvmet_namespaces_type);
+	configfs_add_default_group(&subsys->namespaces_group, &subsys->group);
+
+	config_group_init_type_name(&subsys->ports_group,
+			"ports", &nvmet_ports_type);
+	configfs_add_default_group(&subsys->ports_group, &subsys->group);
+
+	config_group_init_type_name(&subsys->hosts_group,
+			"hosts", &nvmet_hosts_type);
+	configfs_add_default_group(&subsys->hosts_group, &subsys->group);
+
+//	XXX: subsys->allow_any_host hardcoded to true
+	subsys->allow_any_host = true;
+
+	return &subsys->group;
+}
+
+static void nvmet_drop_subsys(struct config_group *group, struct config_item *item)
+{
+	/*
+	 * struct nvmet_subsys is released via nvmet_subsys_release()
+	 */
+	config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_subsystems_group_ops = {
+	.make_group		= nvmet_make_subsys,
+	.drop_item		= nvmet_drop_subsys,
+};
+
+static struct config_item_type nvmet_subsystems_type = {
+	.ct_group_ops		= &nvmet_subsystems_group_ops,
+	.ct_owner		= THIS_MODULE,
+};
+
+static struct config_group nvmet_subsystems_group;
+
+static struct config_item_type nvmet_root_type = {
+	.ct_owner		= THIS_MODULE,
+};
+
+static struct configfs_subsystem nvmet_configfs_subsystem = {
+	.su_group = {
+		.cg_item = {
+			.ci_namebuf	= "nvmet",
+			.ci_type	= &nvmet_root_type,
+		},
+	},
+};
+
+int __init nvmet_init_configfs(void)
+{
+	int ret;
+
+	config_group_init(&nvmet_configfs_subsystem.su_group);
+	mutex_init(&nvmet_configfs_subsystem.su_mutex);
+
+	config_group_init_type_name(&nvmet_subsystems_group,
+			"subsystems", &nvmet_subsystems_type);
+	configfs_add_default_group(&nvmet_subsystems_group,
+			&nvmet_configfs_subsystem.su_group);
+
+	ret = configfs_register_subsystem(&nvmet_configfs_subsystem);
+	if (ret) {
+		pr_err("configfs_register_subsystem: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+void __exit nvmet_exit_configfs(void)
+{
+	configfs_unregister_subsystem(&nvmet_configfs_subsystem);
+}
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index aebe646..b30896a 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -441,7 +441,9 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
 	if (!link)
 		return -ENOMEM;
 	link->subsys = subsys;
-
+#if 1
+	BUG_ON(1);
+#else
 	down_write(&nvmet_config_sem);
 	ret = -EEXIST;
 	list_for_each_entry(p, &port->subsystems, entry) {
@@ -458,6 +460,7 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
 	list_add_tail(&link->entry, &port->subsystems);
 	nvmet_genctr++;
 	up_write(&nvmet_config_sem);
+#endif
 	return 0;
 
 out_free_link:
@@ -469,6 +472,7 @@ out_free_link:
 static int nvmet_port_subsys_drop_link(struct config_item *parent,
 		struct config_item *target)
 {
+#if 0
 	struct nvmet_port *port = to_nvmet_port(parent->ci_parent);
 	struct nvmet_subsys *subsys = to_subsys(target);
 	struct nvmet_subsys_link *p;
@@ -487,7 +491,9 @@ found:
 	if (list_empty(&port->subsystems))
 		nvmet_disable_port(port);
 	up_write(&nvmet_config_sem);
+
 	kfree(p);
+#endif
 	return 0;
 }
 
@@ -504,6 +510,7 @@ static struct config_item_type nvmet_port_subsys_type = {
 static int nvmet_allowed_hosts_allow_link(struct config_item *parent,
 		struct config_item *target)
 {
+#if 0
 	struct nvmet_subsys *subsys = to_subsys(parent->ci_parent);
 	struct nvmet_host *host;
 	struct nvmet_host_link *link, *p;
@@ -540,11 +547,13 @@ out_free_link:
 	up_write(&nvmet_config_sem);
 	kfree(link);
 	return ret;
+#endif
 }
 
 static int nvmet_allowed_hosts_drop_link(struct config_item *parent,
 		struct config_item *target)
 {
+#if 0
 	struct nvmet_subsys *subsys = to_subsys(parent->ci_parent);
 	struct nvmet_host *host = to_host(target);
 	struct nvmet_host_link *p;
@@ -563,6 +572,7 @@ found:
 	up_write(&nvmet_config_sem);
 	kfree(p);
 	return 0;
+#endif
 }
 
 static struct configfs_item_operations nvmet_allowed_hosts_item_ops = {
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 689ad4c..3357696 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -147,6 +147,7 @@ EXPORT_SYMBOL_GPL(nvmet_unregister_transport);
 
 int nvmet_enable_port(struct nvmet_port *port)
 {
+#if 0
 	struct nvmet_fabrics_ops *ops;
 	int ret;
 
@@ -175,11 +176,13 @@ int nvmet_enable_port(struct nvmet_port *port)
 	}
 
 	port->enabled = true;
+#endif
 	return 0;
 }
 
 void nvmet_disable_port(struct nvmet_port *port)
 {
+#if 0
 	struct nvmet_fabrics_ops *ops;
 
 	lockdep_assert_held(&nvmet_config_sem);
@@ -189,6 +192,7 @@ void nvmet_disable_port(struct nvmet_port *port)
 	ops = nvmet_transports[port->disc_addr.trtype];
 	ops->remove_port(port);
 	module_put(ops->owner);
+#endif
 }
 
 struct nvmet_fabrics_ops *
@@ -681,15 +685,19 @@ out:
 static bool __nvmet_host_allowed(struct nvmet_subsys *subsys,
 		const char *hostnqn)
 {
-	struct nvmet_host_link *p;
+	struct nvmet_host *h;
 
 	if (subsys->allow_any_host)
 		return true;
 
-	list_for_each_entry(p, &subsys->hosts, entry) {
-		if (!strcmp(nvmet_host_name(p->host), hostnqn))
+	mutex_lock(&subsys->hosts_mutex);
+	list_for_each_entry(h, &subsys->hosts, node) {
+		if (!strcmp(nvmet_host_name(h), hostnqn)) {
+			mutex_unlock(&subsys->hosts_mutex);
 			return true;
+		}
 	}
+	mutex_unlock(&subsys->hosts_mutex);
 
 	return false;
 }
@@ -697,10 +705,21 @@ static bool __nvmet_host_allowed(struct nvmet_subsys *subsys,
 static bool nvmet_host_discovery_allowed(struct nvmet_req *req,
 		const char *hostnqn)
 {
-	struct nvmet_subsys_link *s;
+	struct nvmet_port_binding *pb;
+	struct nvmet_port *port = req->port;
+	struct nvmet_subsys *subsys;
+
+	if (!port)
+		return false;
+
+	lockdep_assert_held(&port->port_binding_mutex);
+
+	list_for_each_entry(pb, &port->port_binding_list, node) {
+		subsys = pb->nf_subsys;
+		if (!subsys)
+			continue;
 
-	list_for_each_entry(s, &req->port->subsystems, entry) {
-		if (__nvmet_host_allowed(s->subsys, hostnqn))
+		if (__nvmet_host_allowed(subsys, hostnqn))
 			return true;
 	}
 
@@ -710,8 +729,6 @@ static bool nvmet_host_discovery_allowed(struct nvmet_req *req,
 bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys,
 		const char *hostnqn)
 {
-	lockdep_assert_held(&nvmet_config_sem);
-
 	if (subsys->type == NVME_NQN_DISC)
 		return nvmet_host_discovery_allowed(req, hostnqn);
 	else
@@ -721,13 +738,14 @@ bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys,
 u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
 {
+	struct nvmet_port *port = req->port;
 	struct nvmet_subsys *subsys;
 	struct nvmet_ctrl *ctrl;
 	int ret;
 	u16 status;
 
 	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
-	subsys = nvmet_find_get_subsys(req->port, subsysnqn);
+	subsys = nvmet_find_get_subsys(port, subsysnqn);
 	if (!subsys) {
 		pr_warn("connect request for invalid subsystem %s!\n",
 			subsysnqn);
@@ -736,15 +754,16 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 	}
 
 	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
-	down_read(&nvmet_config_sem);
+
+	mutex_lock(&port->port_binding_mutex);
 	if (!nvmet_host_allowed(req, subsys, hostnqn)) {
 		pr_info("connect by host %s for subsystem %s not allowed\n",
 			hostnqn, subsysnqn);
 		req->rsp->result = IPO_IATTR_CONNECT_DATA(hostnqn);
-		up_read(&nvmet_config_sem);
+		mutex_unlock(&port->port_binding_mutex);
 		goto out_put_subsystem;
 	}
-	up_read(&nvmet_config_sem);
+	mutex_unlock(&port->port_binding_mutex);
 
 	status = NVME_SC_INTERNAL;
 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
@@ -872,10 +891,29 @@ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvmet_ctrl_fatal_error);
 
+void nvmet_port_binding_enable(struct nvmet_port_binding *pb, struct nvmet_port *port)
+{
+	mutex_lock(&port->port_binding_mutex);
+	pb->enabled = true;
+	list_add_tail(&pb->node, &port->port_binding_list);
+	mutex_unlock(&port->port_binding_mutex);
+}
+EXPORT_SYMBOL_GPL(nvmet_port_binding_enable);
+
+void nvmet_port_binding_disable(struct nvmet_port_binding *pb, struct nvmet_port *port)
+{
+	mutex_lock(&port->port_binding_mutex);
+	pb->enabled = false;
+	list_del_init(&pb->node);
+	mutex_unlock(&port->port_binding_mutex);
+}
+EXPORT_SYMBOL_GPL(nvmet_port_binding_disable);
+
 static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
 		const char *subsysnqn)
 {
-	struct nvmet_subsys_link *p;
+	struct nvmet_port_binding *pb;
+	struct nvmet_subsys *subsys;
 
 	if (!port)
 		return NULL;
@@ -887,17 +925,22 @@ static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
 		return nvmet_disc_subsys;
 	}
 
-	down_read(&nvmet_config_sem);
-	list_for_each_entry(p, &port->subsystems, entry) {
-		if (!strncmp(p->subsys->subsysnqn, subsysnqn,
-				NVMF_NQN_SIZE)) {
-			if (!kref_get_unless_zero(&p->subsys->ref))
-				break;
-			up_read(&nvmet_config_sem);
-			return p->subsys;
+	mutex_lock(&port->port_binding_mutex);
+	list_for_each_entry(pb, &port->port_binding_list, node) {
+		subsys = pb->nf_subsys;
+		if (!subsys)
+			continue;
+
+		if (strcmp(subsys->subsysnqn, subsysnqn))
+			continue;
+
+		if (kref_get_unless_zero(&subsys->ref)) {	
+			mutex_unlock(&port->port_binding_mutex);
+			return subsys;
 		}
 	}
-	up_read(&nvmet_config_sem);
+	mutex_unlock(&port->port_binding_mutex);
+
 	return NULL;
 }
 
@@ -935,13 +978,13 @@ struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn,
 	kref_init(&subsys->ref);
 
 	mutex_init(&subsys->lock);
+	mutex_init(&subsys->hosts_mutex);
 	INIT_LIST_HEAD(&subsys->namespaces);
 	INIT_LIST_HEAD(&subsys->ctrls);
+	INIT_LIST_HEAD(&subsys->hosts);
 
 	ida_init(&subsys->cntlid_ida);
 
-	INIT_LIST_HEAD(&subsys->hosts);
-
 	return subsys;
 }
 
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 6f65646..32dc05c 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -18,7 +18,7 @@
 
 struct nvmet_subsys *nvmet_disc_subsys;
 
-u64 nvmet_genctr;
+atomic_long_t nvmet_genctr;
 
 void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port)
 {
@@ -26,7 +26,7 @@ void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port)
 	if (list_empty(&port->entry)) {
 		list_add_tail(&port->entry, &parent->referrals);
 		port->enabled = true;
-		nvmet_genctr++;
+		atomic64_inc(&nvmet_genctr);
 	}
 	up_write(&nvmet_config_sem);
 }
@@ -37,7 +37,7 @@ void nvmet_referral_disable(struct nvmet_port *port)
 	if (!list_empty(&port->entry)) {
 		port->enabled = false;
 		list_del_init(&port->entry);
-		nvmet_genctr++;
+		atomic64_inc(&nvmet_genctr);
 	}
 	up_write(&nvmet_config_sem);
 }
@@ -69,8 +69,8 @@ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req)
 	size_t data_len = nvmet_get_log_page_len(req->cmd);
 	size_t alloc_len = max(data_len, sizeof(*hdr));
 	int residual_len = data_len - sizeof(*hdr);
-	struct nvmet_subsys_link *p;
-	struct nvmet_port *r;
+	struct nvmet_port *port = req->port;
+	struct nvmet_port_binding *pb;
 	u32 numrec = 0;
 	u16 status = 0;
 
@@ -84,7 +84,7 @@ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req)
 		status = NVME_SC_INTERNAL;
 		goto out;
 	}
-
+#if 0
 	down_read(&nvmet_config_sem);
 	list_for_each_entry(p, &req->port->subsystems, entry) {
 		if (!nvmet_host_allowed(req, p->subsys, ctrl->hostnqn))
@@ -113,7 +113,26 @@ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req)
 	hdr->recfmt = cpu_to_le16(0);
 
 	up_read(&nvmet_config_sem);
+#else
+	mutex_lock(&port->port_binding_mutex);
+	list_for_each_entry(pb, &port->port_binding_list, node) {
+		if (!nvmet_host_allowed(req, pb->nf_subsys, ctrl->hostnqn))
+			continue;
+
+		if (residual_len >= entry_size) {
+			nvmet_format_discovery_entry(hdr, port,
+					pb->nf_subsys->subsysnqn,
+					NVME_NQN_NVME, numrec);
+			residual_len -= entry_size;
+		}
+		numrec++;
+	}
+	hdr->genctr = cpu_to_le64(atomic64_read(&nvmet_genctr));
+	hdr->numrec = cpu_to_le64(numrec);
+	hdr->recfmt = cpu_to_le16(0);
 
+	mutex_unlock(&port->port_binding_mutex);
+#endif
 	status = nvmet_copy_to_sgl(req, 0, hdr, data_len);
 	kfree(hdr);
 out:
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 17fd217..265f56f 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -79,6 +79,8 @@ struct nvmet_sq {
 	struct completion	free_done;
 };
 
+struct nvmet_port_binding;
+
 /**
  * struct nvmet_port -	Common structure to keep port
  *				information for the target.
@@ -98,6 +100,25 @@ struct nvmet_port {
 	struct list_head		referrals;
 	void				*priv;
 	bool				enabled;
+
+	struct nvmet_subsys		*nf_subsys;
+	struct nvmet_fabrics_ops	*nf_ops;
+
+	struct mutex			port_binding_mutex;
+	struct list_head		port_binding_list;
+};
+
+struct nvmet_port_binding {
+	bool				enabled;
+	struct nvmf_disc_rsp_page_entry	disc_addr;
+
+	struct nvmet_port		*port;
+	struct nvmet_subsys		*nf_subsys;
+	struct nvmet_fabrics_ops	*nf_ops;
+
+	struct list_head		node;
+	struct list_head		subsys_node;
+	struct config_group		group;
 };
 
 static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
@@ -106,6 +127,13 @@ static inline struct nvmet_port *to_nvmet_port(struct config_item *item)
 			group);
 }
 
+static inline struct nvmet_port_binding *
+to_nvmet_port_binding(struct config_item *item)
+{
+	return container_of(to_config_group(item), struct nvmet_port_binding,
+			group);
+}
+
 struct nvmet_ctrl {
 	struct nvmet_subsys	*subsys;
 	struct nvmet_cq		**cqs;
@@ -147,6 +175,7 @@ struct nvmet_subsys {
 	struct list_head	ctrls;
 	struct ida		cntlid_ida;
 
+	struct mutex		hosts_mutex;
 	struct list_head	hosts;
 	bool			allow_any_host;
 
@@ -158,7 +187,8 @@ struct nvmet_subsys {
 	struct config_group	group;
 
 	struct config_group	namespaces_group;
-	struct config_group	allowed_hosts_group;
+	struct config_group	ports_group;
+	struct config_group	hosts_group;
 };
 
 static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -173,7 +203,17 @@ static inline struct nvmet_subsys *namespaces_to_subsys(
 			namespaces_group);
 }
 
+static inline struct nvmet_subsys *ports_to_subsys(
+		struct config_item *item)
+{
+	return container_of(to_config_group(item), struct nvmet_subsys,
+			ports_group);
+}
+
 struct nvmet_host {
+	struct nvmet_subsys	*subsys;
+
+	struct list_head	node;
 	struct config_group	group;
 };
 
@@ -205,8 +245,8 @@ struct nvmet_fabrics_ops {
 	unsigned int msdbd;
 	bool has_keyed_sgls : 1;
 	void (*queue_response)(struct nvmet_req *req);
-	int (*add_port)(struct nvmet_port *port);
-	void (*remove_port)(struct nvmet_port *port);
+	int (*add_port)(struct nvmet_port_binding *pb);
+	void (*remove_port)(struct nvmet_port_binding *pb);
 	void (*delete_ctrl)(struct nvmet_ctrl *ctrl);
 };
 
@@ -274,6 +314,8 @@ void nvmet_sq_destroy(struct nvmet_sq *sq);
 int nvmet_sq_init(struct nvmet_sq *sq);
 
 void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl);
+void nvmet_port_binding_enable(struct nvmet_port_binding *pb, struct nvmet_port *port);
+void nvmet_port_binding_disable(struct nvmet_port_binding *pb, struct nvmet_port *port);
 
 void nvmet_update_cc(struct nvmet_ctrl *ctrl, u32 new);
 u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
@@ -326,7 +368,7 @@ int __init nvmet_init_discovery(void);
 void nvmet_exit_discovery(void);
 
 extern struct nvmet_subsys *nvmet_disc_subsys;
-extern u64 nvmet_genctr;
+extern atomic_long_t nvmet_genctr;
 extern struct rw_semaphore nvmet_config_sem;
 
 bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys,
-- 
1.9.1



* [RFC-v2 04/11] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (2 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 03/11] nvmet: Add support for configfs-ng multi-tenant logic Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 05/11] nvmet/loop: Add support for controller-per-port model + nvmet_port_binding Nicholas A. Bellinger
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch hooks up nvmet_ns_enable() to accept the RCU-protected
struct se_device provided via configfs symlink from existing
/sys/kernel/config/target/core/ driver backends.

Also, drop the now unused internal ns->bdev + ns->device_path
usage, and add WIP stubs for nvmet/io-cmd sbc_ops backend
conversion to be added in subsequent patches.
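
As a rough illustration of where the sbc_ops conversion is heading,
an I/O path is expected to dereference the new pointer under an RCU
read lock, e.g. (hypothetical helper, not code from this patch; the
get_blocks() backend method is the one already used in
nvmet_ns_enable() below):

	static u64 nvmet_ns_num_blocks(struct nvmet_ns *ns)
	{
		struct se_device *dev;
		u64 blocks = 0;

		rcu_read_lock();
		dev = rcu_dereference(ns->dev);
		if (dev)
			blocks = dev->transport->get_blocks(dev);
		rcu_read_unlock();

		return blocks;
	}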

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/configfs-ng.c |  3 +--
 drivers/nvme/target/core.c        | 30 ++++++++----------------------
 drivers/nvme/target/io-cmd.c      | 17 +++++++++++++++--
 drivers/nvme/target/nvmet.h       |  6 ++----
 4 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/drivers/nvme/target/configfs-ng.c b/drivers/nvme/target/configfs-ng.c
index 28dc24b..1cd1e8e 100644
--- a/drivers/nvme/target/configfs-ng.c
+++ b/drivers/nvme/target/configfs-ng.c
@@ -470,8 +470,7 @@ static int nvmet_ns_link(struct config_item *ns_ci, struct config_item *dev_ci)
 		return -ENOSYS;
 	}
 
-	// XXX: Pass in struct se_device into nvmet_ns_enable
-	return nvmet_ns_enable(ns);
+	return nvmet_ns_enable(ns, dev);
 }
 
 static int nvmet_ns_unlink(struct config_item *ns_ci, struct config_item *dev_ci)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 3357696..e2176e0 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -13,6 +13,8 @@
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include <linux/module.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
 #include "nvmet.h"
 
 static struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
@@ -292,7 +294,7 @@ void nvmet_put_namespace(struct nvmet_ns *ns)
 	percpu_ref_put(&ns->ref);
 }
 
-int nvmet_ns_enable(struct nvmet_ns *ns)
+int nvmet_ns_enable(struct nvmet_ns *ns, struct se_device *dev)
 {
 	struct nvmet_subsys *subsys = ns->subsys;
 	struct nvmet_ctrl *ctrl;
@@ -302,23 +304,14 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
 	if (!list_empty(&ns->dev_link))
 		goto out_unlock;
 
-	ns->bdev = blkdev_get_by_path(ns->device_path, FMODE_READ | FMODE_WRITE,
-			NULL);
-	if (IS_ERR(ns->bdev)) {
-		pr_err("nvmet: failed to open block device %s: (%ld)\n",
-			ns->device_path, PTR_ERR(ns->bdev));
-		ret = PTR_ERR(ns->bdev);
-		ns->bdev = NULL;
-		goto out_unlock;
-	}
-
-	ns->size = i_size_read(ns->bdev->bd_inode);
-	ns->blksize_shift = blksize_bits(bdev_logical_block_size(ns->bdev));
+	rcu_assign_pointer(ns->dev, dev);
+	ns->size = dev->transport->get_blocks(dev) * dev->dev_attrib.hw_block_size;
+	ns->blksize_shift = blksize_bits(dev->dev_attrib.hw_block_size);
 
 	ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
 				0, GFP_KERNEL);
 	if (ret)
-		goto out_blkdev_put;
+		goto out_unlock;
 
 	if (ns->nsid > subsys->max_nsid)
 		subsys->max_nsid = ns->nsid;
@@ -348,10 +341,6 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
 out_unlock:
 	mutex_unlock(&subsys->lock);
 	return ret;
-out_blkdev_put:
-	blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
-	ns->bdev = NULL;
-	goto out_unlock;
 }
 
 void nvmet_ns_disable(struct nvmet_ns *ns)
@@ -384,16 +373,13 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
 	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
 		nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
 
-	if (ns->bdev)
-		blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
+	rcu_assign_pointer(ns->dev, NULL);
 	mutex_unlock(&subsys->lock);
 }
 
 void nvmet_ns_free(struct nvmet_ns *ns)
 {
 	nvmet_ns_disable(ns);
-
-	kfree(ns->device_path);
 	kfree(ns);
 }
 
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 76dbf73..38c2e97 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -16,6 +16,7 @@
 #include <linux/module.h>
 #include "nvmet.h"
 
+#if 0
 static void nvmet_bio_done(struct bio *bio)
 {
 	struct nvmet_req *req = bio->bi_private;
@@ -26,6 +27,7 @@ static void nvmet_bio_done(struct bio *bio)
 	if (bio != &req->inline_bio)
 		bio_put(bio);
 }
+#endif
 
 static inline u32 nvmet_rw_len(struct nvmet_req *req)
 {
@@ -33,6 +35,7 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
+#if 0
 static void nvmet_inline_bio_init(struct nvmet_req *req)
 {
 	struct bio *bio = &req->inline_bio;
@@ -41,21 +44,23 @@ static void nvmet_inline_bio_init(struct nvmet_req *req)
 	bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;
 	bio->bi_io_vec = req->inline_bvec;
 }
+#endif
 
 static void nvmet_execute_rw(struct nvmet_req *req)
 {
+#if 0
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
 	struct bio *bio;
 	sector_t sector;
 	blk_qc_t cookie;
 	int rw, i;
-
+#endif
 	if (!req->sg_cnt) {
 		nvmet_req_complete(req, 0);
 		return;
 	}
-
+#if 0
 	if (req->cmd->rw.opcode == nvme_cmd_write) {
 		if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
 			rw = WRITE_FUA;
@@ -95,10 +100,12 @@ static void nvmet_execute_rw(struct nvmet_req *req)
 	cookie = submit_bio(rw, bio);
 
 	blk_poll(bdev_get_queue(req->ns->bdev), cookie);
+#endif
 }
 
 static void nvmet_execute_flush(struct nvmet_req *req)
 {
+#if 0
 	struct bio *bio;
 
 	nvmet_inline_bio_init(req);
@@ -109,8 +116,10 @@ static void nvmet_execute_flush(struct nvmet_req *req)
 	bio->bi_end_io = nvmet_bio_done;
 
 	submit_bio(WRITE_FLUSH, bio);
+#endif
 }
 
+#if 0
 static u16 nvmet_discard_range(struct nvmet_ns *ns,
 		struct nvme_dsm_range *range, int type, struct bio **bio)
 {
@@ -119,11 +128,14 @@ static u16 nvmet_discard_range(struct nvmet_ns *ns,
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
 			GFP_KERNEL, type, bio))
 		return NVME_SC_INTERNAL | NVME_SC_DNR;
+
 	return 0;
 }
+#endif
 
 static void nvmet_execute_discard(struct nvmet_req *req)
 {
+#if 0
 	struct nvme_dsm_range range;
 	struct bio *bio = NULL;
 	int type = REQ_WRITE | REQ_DISCARD, i;
@@ -152,6 +164,7 @@ static void nvmet_execute_discard(struct nvmet_req *req)
 	} else {
 		nvmet_req_complete(req, status);
 	}
+#endif
 }
 
 static void nvmet_execute_dsm(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 265f56f..af616d0 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -41,15 +41,13 @@
 struct nvmet_ns {
 	struct list_head	dev_link;
 	struct percpu_ref	ref;
-	struct block_device	*bdev;
+	struct se_device __rcu	*dev;
 	u32			nsid;
 	u32			blksize_shift;
 	loff_t			size;
 	u8			nguid[16];
 
 	struct nvmet_subsys	*subsys;
-	const char		*device_path;
-
 	struct config_group	device_group;
 	struct config_group	group;
 
@@ -330,7 +328,7 @@ void nvmet_subsys_put(struct nvmet_subsys *subsys);
 
 struct nvmet_ns *nvmet_find_namespace(struct nvmet_ctrl *ctrl, __le32 nsid);
 void nvmet_put_namespace(struct nvmet_ns *ns);
-int nvmet_ns_enable(struct nvmet_ns *ns);
+int nvmet_ns_enable(struct nvmet_ns *ns, struct se_device *dev);
 void nvmet_ns_disable(struct nvmet_ns *ns);
 struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid);
 void nvmet_ns_free(struct nvmet_ns *ns);
-- 
1.9.1



* [RFC-v2 05/11] nvmet/loop: Add support for controller-per-port model + nvmet_port_binding
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (3 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 04/11] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 06/11] nvmet/rdma: Convert to struct nvmet_port_binding Nicholas A. Bellinger
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch introduces loopback support for an NVMe host
controller per nvmet_port instance model, following what
we've done in drivers/target/loopback/ to allow multiple
host LLDs to co-exist.

It changes nvme_loop_add_port() to use struct nvme_loop_port
and take the nvmf_host_add() reference, and invokes
device_register() to trigger nvme_loop_driver_probe(), which
kicks off controller creation via nvme_loop_create_ctrl().

This allows nvme_loop_queue_rq() to set iod->req.port to the
per-controller nvmet_port pointer, instead of a single
hardcoded global nvmet_loop_port.

It also adds nvme_loop_remove_port(), which calls
device_unregister(); the resulting nvme_loop_driver_remove()
invokes nvme_loop_del_ctrl() and nvmf_free_options() to drop
the nvmet_port's struct nvmf_host reference when the
nvmet_port_binding is removed from the associated nvmet_subsys.
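
The resulting lifecycle looks roughly like this (a simplified sketch;
the device naming and field setup shown here are assumptions, the real
details live in nvme_loop_add_port() / nvme_loop_remove_port() below):

	/* nvme_loop_add_port(): one pseudo device per port binding */
	loop_port->dev.bus = &nvme_loop_bus;
	loop_port->dev.parent = nvme_loop_primary;
	loop_port->dev.release = nvme_loop_release_adapter;
	dev_set_name(&loop_port->dev, "nvme_loop_ctl%d", instance); /* name scheme assumed */
	ret = device_register(&loop_port->dev);
	/* -> nvme_loop_driver_probe() -> nvme_loop_create_ctrl() */

	/* nvme_loop_remove_port(): tear down on port binding removal */
	device_unregister(&loop_port->dev);
	/* -> nvme_loop_driver_remove() -> nvme_loop_del_ctrl() +
	 *    nvmf_free_options() */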

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jay Freyensee <james.p.freyensee@intel.com>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/loop.c | 205 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 185 insertions(+), 20 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 08b4fbb..e9f31d4 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -45,6 +45,13 @@ struct nvme_loop_iod {
 	struct scatterlist	first_sgl[];
 };
 
+struct nvme_loop_port {
+	struct device		dev;
+	struct nvmf_ctrl_options *opts;
+	struct nvme_ctrl	*ctrl;
+	struct nvmet_port	port;
+};
+
 struct nvme_loop_ctrl {
 	spinlock_t		lock;
 	struct nvme_loop_queue	*queues;
@@ -61,6 +68,8 @@ struct nvme_loop_ctrl {
 	struct nvmet_ctrl	*target_ctrl;
 	struct work_struct	delete_work;
 	struct work_struct	reset_work;
+
+	struct nvme_loop_port	*port;
 };
 
 static inline struct nvme_loop_ctrl *to_loop_ctrl(struct nvme_ctrl *ctrl)
@@ -74,8 +83,6 @@ struct nvme_loop_queue {
 	struct nvme_loop_ctrl	*ctrl;
 };
 
-static struct nvmet_port *nvmet_loop_port;
-
 static LIST_HEAD(nvme_loop_ctrl_list);
 static DEFINE_MUTEX(nvme_loop_ctrl_mutex);
 
@@ -172,7 +179,8 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		return ret;
 
 	iod->cmd.common.flags |= NVME_CMD_SGL_METABUF;
-	iod->req.port = nvmet_loop_port;
+	iod->req.port = &queue->ctrl->port->port;
+
 	if (!nvmet_req_init(&iod->req, &queue->nvme_cq,
 			&queue->nvme_sq, &nvme_loop_ops)) {
 		nvme_cleanup_cmd(req);
@@ -599,6 +607,8 @@ out_destroy_queues:
 static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 		struct nvmf_ctrl_options *opts)
 {
+	struct nvme_loop_port *loop_port = container_of(dev,
+				struct nvme_loop_port, dev);
 	struct nvme_loop_ctrl *ctrl;
 	bool changed;
 	int ret;
@@ -607,6 +617,7 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 	if (!ctrl)
 		return ERR_PTR(-ENOMEM);
 	ctrl->ctrl.opts = opts;
+	ctrl->port = loop_port;
 	INIT_LIST_HEAD(&ctrl->list);
 
 	INIT_WORK(&ctrl->delete_work, nvme_loop_del_ctrl_work);
@@ -681,29 +692,135 @@ out_put_ctrl:
 	return ERR_PTR(ret);
 }
 
-static int nvme_loop_add_port(struct nvmet_port *port)
+static int nvme_loop_driver_probe(struct device *dev)
 {
-	/*
-	 * XXX: disalow adding more than one port so
-	 * there is no connection rejections when a
-	 * a subsystem is assigned to a port for which
-	 * loop doesn't have a pointer.
-	 * This scenario would be possible if we allowed
-	 * more than one port to be added and a subsystem
-	 * was assigned to a port other than nvmet_loop_port.
-	 */
+	struct nvme_loop_port *loop_port = container_of(dev,
+				struct nvme_loop_port, dev);
+	struct nvme_ctrl *ctrl;
 
-	if (nvmet_loop_port)
-		return -EPERM;
+	ctrl = nvme_loop_create_ctrl(dev, loop_port->opts);
+	if (IS_ERR(ctrl))
+		return PTR_ERR(ctrl);
 
-	nvmet_loop_port = port;
+	loop_port->ctrl = ctrl;
 	return 0;
 }
 
-static void nvme_loop_remove_port(struct nvmet_port *port)
+static int nvme_loop_driver_remove(struct device *dev)
+{
+	struct nvme_loop_port *loop_port = container_of(dev,
+				struct nvme_loop_port, dev);
+	struct nvme_ctrl *ctrl = loop_port->ctrl;
+	struct nvmf_ctrl_options *opts = loop_port->opts;
+
+	nvme_loop_del_ctrl(ctrl);
+	nvmf_free_options(opts);
+	return 0;
+}
+
+static int pseudo_bus_match(struct device *dev,
+			    struct device_driver *dev_driver)
+{
+	return 1;
+}
+
+static struct bus_type nvme_loop_bus = {
+	.name			= "nvme_loop_bus",
+	.match			= pseudo_bus_match,
+	.probe			= nvme_loop_driver_probe,
+	.remove			= nvme_loop_driver_remove,
+};
+
+static struct device_driver nvme_loop_driverfs = {
+	.name			= "nvme_loop",
+	.bus			= &nvme_loop_bus,
+};
+
+static void nvme_loop_release_adapter(struct device *dev)
 {
-	if (port == nvmet_loop_port)
-		nvmet_loop_port = NULL;
+	struct nvme_loop_port *loop_port = container_of(dev,
+				struct nvme_loop_port, dev);
+
+	kfree(loop_port);
+}
+
+static struct device *nvme_loop_primary;
+
+static int nvme_loop_add_port(struct nvmet_port_binding *pb)
+{
+	struct nvmet_subsys *subsys = pb->nf_subsys;
+	struct nvme_loop_port *loop_port;
+	struct nvmf_ctrl_options *opts;
+	struct device *dev;
+	int ret;
+
+	loop_port = kzalloc(sizeof(*loop_port), GFP_KERNEL);
+	if (!loop_port)
+		return -ENOMEM;
+
+	mutex_init(&loop_port->port.port_binding_mutex);
+	INIT_LIST_HEAD(&loop_port->port.port_binding_list);
+	loop_port->port.priv = loop_port;
+	loop_port->port.nf_subsys = pb->nf_subsys;
+	loop_port->port.nf_ops = pb->nf_ops;
+	pb->port = &loop_port->port;
+
+	opts = kzalloc(sizeof(*opts), GFP_KERNEL);
+	if (!opts) {
+		kfree(loop_port);
+		return -ENOMEM;
+	}
+	loop_port->opts = opts;
+
+	/* Set defaults */
+	opts->queue_size = NVMF_DEF_QUEUE_SIZE;
+	opts->nr_io_queues = num_online_cpus();
+	opts->tl_retry_count = 2;
+	opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY;
+	opts->kato = NVME_DEFAULT_KATO;
+
+	opts->host = nvmf_host_add(NULL);
+	if (!opts->host) {
+		kfree(opts);
+		kfree(loop_port);
+		return -ENOMEM;
+	}
+
+	opts->transport = kstrdup("loop", GFP_KERNEL);
+	opts->subsysnqn = kstrdup(subsys->subsysnqn, GFP_KERNEL);
+
+	dev = &loop_port->dev;
+	dev->bus = &nvme_loop_bus;
+	dev->parent = nvme_loop_primary;
+	dev->release = &nvme_loop_release_adapter;
+	dev_set_name(dev, "nvme_loop_ctrl:%s", subsys->subsysnqn);
+
+	nvmet_port_binding_enable(pb, &loop_port->port);
+
+	ret = device_register(dev);
+	if (ret) {
+		pr_err("device_register() failed: %d\n", ret);
+		nvmet_port_binding_disable(pb, &loop_port->port);
+		nvmf_free_options(opts);
+		kfree(loop_port);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void nvme_loop_remove_port(struct nvmet_port_binding *pb)
+{
+	struct nvmet_port *port = pb->port;
+	struct nvme_loop_port *loop_port;
+
+	if (!port)
+		return;
+
+	loop_port = container_of(port, struct nvme_loop_port, port);
+	nvmet_port_binding_disable(pb, &loop_port->port);
+
+	device_unregister(&loop_port->dev);
 }
 
 static struct nvmet_fabrics_ops nvme_loop_ops = {
@@ -720,13 +837,59 @@ static struct nvmf_transport_ops nvme_loop_transport = {
 	.create_ctrl	= nvme_loop_create_ctrl,
 };
 
+static int nvme_loop_alloc_core_bus(void)
+{
+	int ret;
+
+	nvme_loop_primary = root_device_register("nvme_loop_0");
+	if (IS_ERR(nvme_loop_primary)) {
+		pr_err("Unable to allocate nvme_loop_primary\n");
+		return PTR_ERR(nvme_loop_primary);
+	}
+
+	ret = bus_register(&nvme_loop_bus);
+	if (ret) {
+		pr_err("bus_register() failed for nvme_loop_bus\n");
+		goto dev_unreg;
+	}
+
+	ret = driver_register(&nvme_loop_driverfs);
+	if (ret) {
+		pr_err("driver_register() failed for"
+				" nvme_loop_driverfs\n");
+		goto bus_unreg;
+	}
+
+	return ret;
+
+bus_unreg:
+	bus_unregister(&nvme_loop_bus);
+dev_unreg:
+	root_device_unregister(nvme_loop_primary);
+	return ret;
+}
+
+static void nvme_loop_release_core_bus(void)
+{
+	driver_unregister(&nvme_loop_driverfs);
+	bus_unregister(&nvme_loop_bus);
+	root_device_unregister(nvme_loop_primary);
+}
+
 static int __init nvme_loop_init_module(void)
 {
 	int ret;
 
-	ret = nvmet_register_transport(&nvme_loop_ops);
+	ret = nvme_loop_alloc_core_bus();
 	if (ret)
 		return ret;
+
+	ret = nvmet_register_transport(&nvme_loop_ops);
+	if (ret) {
+		nvme_loop_release_core_bus();
+		return ret;
+	}
+
 	nvmf_register_transport(&nvme_loop_transport);
 	return 0;
 }
@@ -744,6 +907,8 @@ static void __exit nvme_loop_cleanup_module(void)
 	mutex_unlock(&nvme_loop_ctrl_mutex);
 
 	flush_scheduled_work();
+
+	nvme_loop_release_core_bus();
 }
 
 module_init(nvme_loop_init_module);
-- 
1.9.1


* [RFC-v2 06/11] nvmet/rdma: Convert to struct nvmet_port_binding
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (4 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 05/11] nvmet/loop: Add support for controller-per-port model + nvmet_port_binding Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 07/11] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops Nicholas A. Bellinger
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts nvmet/rdma to nvmet_port_binding in configfs-ng,
and introduces an nvmet_rdma_port that allows multiple nvmet_subsys
nvmet_port_bindings to be mapped to a single nvmet_rdma_port
rdma_cm_id listener.

It moves rdma_cm_id setup into nvmet_rdma_listen_cmid(), and
rdma_cm_id teardown into nvmet_rmda_destroy_cmid(), driven by
nvmet_rdma_port->ref.

It also updates nvmet_rdma_add_port() to perform an internal port
lookup matching traddr and trsvcid, grabbing nvmet_rdma_port->ref
when a matching port already exists.
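
In short, the resulting 1:N sharing works roughly as follows
(condensed from the diff below; locking and error paths elided):

	static int nvmet_rdma_add_port(struct nvmet_port_binding *pb)
	{
		struct nvmet_rdma_port *rdma_port;

		/* reuse an existing listener when traddr + trsvcid match */
		list_for_each_entry(rdma_port, &nvmet_rdma_ports, node) {
			if (!strcmp(rdma_port->port_addr.traddr, pb->disc_addr.traddr) &&
			    !strcmp(rdma_port->port_addr.trsvcid, pb->disc_addr.trsvcid)) {
				kref_get(&rdma_port->ref);
				nvmet_port_binding_enable(pb, &rdma_port->port);
				return 0;
			}
		}

		/* otherwise set up a new rdma_cm_id listener for this address */
		rdma_port = nvmet_rdma_listen_cmid(pb);
		return IS_ERR(rdma_port) ? PTR_ERR(rdma_port) : 0;
	}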

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jay Freyensee <james.p.freyensee@intel.com>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/rdma.c | 127 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 114 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index fccb01d..62638f7af 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -118,6 +118,17 @@ struct nvmet_rdma_device {
 	struct list_head	entry;
 };
 
+struct nvmet_rdma_port {
+	atomic_t		enabled;
+
+	struct rdma_cm_id	*cm_id;
+	struct nvmf_disc_rsp_page_entry port_addr;
+
+	struct list_head	node;
+	struct kref		ref;
+	struct nvmet_port	port;
+};
+
 static bool nvmet_rdma_use_srq;
 module_param_named(use_srq, nvmet_rdma_use_srq, bool, 0444);
 MODULE_PARM_DESC(use_srq, "Use shared receive queue.");
@@ -129,6 +140,9 @@ static DEFINE_MUTEX(nvmet_rdma_queue_mutex);
 static LIST_HEAD(device_list);
 static DEFINE_MUTEX(device_list_mutex);
 
+static LIST_HEAD(nvmet_rdma_ports);
+static DEFINE_MUTEX(nvmet_rdma_ports_mutex);
+
 static bool nvmet_rdma_execute_command(struct nvmet_rdma_rsp *rsp);
 static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
@@ -1127,6 +1141,7 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 {
 	struct nvmet_rdma_device *ndev;
 	struct nvmet_rdma_queue *queue;
+	struct nvmet_rdma_port *rdma_port;
 	int ret = -EINVAL;
 
 	ndev = nvmet_rdma_find_get_device(cm_id);
@@ -1141,7 +1156,8 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		ret = -ENOMEM;
 		goto put_device;
 	}
-	queue->port = cm_id->context;
+	rdma_port = cm_id->context;
+	queue->port = &rdma_port->port;
 
 	ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
 	if (ret)
@@ -1306,26 +1322,50 @@ static void nvmet_rdma_delete_ctrl(struct nvmet_ctrl *ctrl)
 		nvmet_rdma_queue_disconnect(queue);
 }
 
-static int nvmet_rdma_add_port(struct nvmet_port *port)
+static struct nvmet_rdma_port *nvmet_rdma_listen_cmid(struct nvmet_port_binding *pb)
 {
+	struct nvmet_rdma_port *rdma_port;
 	struct rdma_cm_id *cm_id;
 	struct sockaddr_in addr_in;
 	u16 port_in;
 	int ret;
 
-	ret = kstrtou16(port->disc_addr.trsvcid, 0, &port_in);
+	rdma_port = kzalloc(sizeof(*rdma_port), GFP_KERNEL);
+	if (!rdma_port)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&rdma_port->node);
+	kref_init(&rdma_port->ref);
+	mutex_init(&rdma_port->port.port_binding_mutex);
+	INIT_LIST_HEAD(&rdma_port->port.port_binding_list);
+	rdma_port->port.priv = rdma_port;
+	rdma_port->port.nf_subsys = pb->nf_subsys;
+	rdma_port->port.nf_ops = pb->nf_ops;
+	pb->port = &rdma_port->port;
+
+	memcpy(&rdma_port->port_addr, &pb->disc_addr,
+		sizeof(struct nvmf_disc_rsp_page_entry));
+
+	nvmet_port_binding_enable(pb, &rdma_port->port);
+
+	mutex_lock(&nvmet_rdma_ports_mutex);
+	list_add_tail(&rdma_port->node, &nvmet_rdma_ports);
+	mutex_unlock(&nvmet_rdma_ports_mutex);
+
+	ret = kstrtou16(pb->disc_addr.trsvcid, 0, &port_in);
 	if (ret)
-		return ret;
+		goto out_port_disable;
 
 	addr_in.sin_family = AF_INET;
-	addr_in.sin_addr.s_addr = in_aton(port->disc_addr.traddr);
+	addr_in.sin_addr.s_addr = in_aton(pb->disc_addr.traddr);
 	addr_in.sin_port = htons(port_in);
 
-	cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, port,
+	cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, rdma_port,
 			RDMA_PS_TCP, IB_QPT_RC);
 	if (IS_ERR(cm_id)) {
 		pr_err("CM ID creation failed\n");
-		return PTR_ERR(cm_id);
+		ret = PTR_ERR(cm_id);
+		goto out_port_disable;
 	}
 
 	ret = rdma_bind_addr(cm_id, (struct sockaddr *)&addr_in);
@@ -1340,21 +1380,82 @@ static int nvmet_rdma_add_port(struct nvmet_port *port)
 		goto out_destroy_id;
 	}
 
+	atomic_set(&rdma_port->enabled, 1);
 	pr_info("enabling port %d (%pISpc)\n",
-		le16_to_cpu(port->disc_addr.portid), &addr_in);
-	port->priv = cm_id;
-	return 0;
+		le16_to_cpu(pb->disc_addr.portid), &addr_in);
+
+	return rdma_port;
 
 out_destroy_id:
 	rdma_destroy_id(cm_id);
-	return ret;
+out_port_disable:
+	mutex_lock(&nvmet_rdma_ports_mutex);
+	list_del_init(&rdma_port->node);
+	mutex_unlock(&nvmet_rdma_ports_mutex);
+
+	nvmet_port_binding_disable(pb, &rdma_port->port);
+	kfree(rdma_port);
+	return ERR_PTR(ret);
 }
 
-static void nvmet_rdma_remove_port(struct nvmet_port *port)
+static void nvmet_rmda_destroy_cmid(struct kref *ref)
 {
-	struct rdma_cm_id *cm_id = port->priv;
+	struct nvmet_rdma_port *rdma_port = container_of(ref,
+				struct nvmet_rdma_port, ref);
+	struct rdma_cm_id *cm_id = rdma_port->cm_id;
+
+	mutex_lock(&nvmet_rdma_ports_mutex);
+	atomic_set(&rdma_port->enabled, 0);
+	list_del_init(&rdma_port->node);
+	mutex_unlock(&nvmet_rdma_ports_mutex);
 
 	rdma_destroy_id(cm_id);
+	kfree(rdma_port);
+}
+
+static int nvmet_rdma_add_port(struct nvmet_port_binding *pb)
+{
+	struct nvmet_rdma_port *rdma_port;
+	struct nvmf_disc_rsp_page_entry *pb_addr = &pb->disc_addr;
+
+	mutex_lock(&nvmet_rdma_ports_mutex);
+	list_for_each_entry(rdma_port, &nvmet_rdma_ports, node) {
+		struct nvmf_disc_rsp_page_entry *port_addr = &rdma_port->port_addr;
+
+		if (!strcmp(port_addr->traddr, pb_addr->traddr) &&
+		    !strcmp(port_addr->trsvcid, pb_addr->trsvcid)) {
+			if (!atomic_read(&rdma_port->enabled)) {
+				mutex_unlock(&nvmet_rdma_ports_mutex);
+				return -ENODEV;
+			}
+			kref_get(&rdma_port->ref);
+			mutex_unlock(&nvmet_rdma_ports_mutex);
+
+			nvmet_port_binding_enable(pb, &rdma_port->port);
+			return 0;
+		}
+	}
+	mutex_unlock(&nvmet_rdma_ports_mutex);
+
+	rdma_port = nvmet_rdma_listen_cmid(pb);
+	if (IS_ERR(rdma_port))
+		return PTR_ERR(rdma_port);
+
+	return 0;
+}
+
+static void nvmet_rdma_remove_port(struct nvmet_port_binding *pb)
+{
+	struct nvmet_port *port = pb->port;
+	struct nvmet_rdma_port *rdma_port;
+
+	if (!port)
+		return;
+
+	rdma_port = container_of(port, struct nvmet_rdma_port, port);
+	nvmet_port_binding_disable(pb, &rdma_port->port);
+
+	kref_put(&rdma_port->ref, nvmet_rmda_destroy_cmid);
 }
 
 static struct nvmet_fabrics_ops nvmet_rdma_ops = {
-- 
1.9.1


* [RFC-v2 07/11] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (5 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 06/11] nvmet/rdma: Convert to struct nvmet_port_binding Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 08/11] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache " Nicholas A. Bellinger
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts nvmet_execute_rw() to utilize sbc_ops->execute_rw()
for target_iostate + target_iomem based I/O submission into existing
backend drivers via configfs in /sys/kernel/config/target/core/.

This includes support for passing T10-PI scatterlists via target_iomem
into the existing sbc_ops->execute_rw() logic, and is functional with
IBLOCK, FILEIO, and RAMDISK backends.

Note the preceding target/iblock patch absorbs the inline bio + bvec
and blk_poll() optimizations from Ming + Sagi in nvmet/io-cmd into
target_core_iblock.c.
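
The heart of the conversion is the dispatch below; roughly
(condensed from the diff, with FUA and T10-PI setup elided):

	static void nvmet_execute_rw(struct nvmet_req *req)
	{
		struct target_iostate *ios = &req->t_iostate;
		struct target_iomem *iomem = &req->t_iomem;
		struct se_device *dev = rcu_dereference_raw(req->ns->dev);
		struct sbc_ops *sbc_ops = dev->transport->sbc_ops;

		/* translate the NVMe R/W command into a target_iostate ... */
		ios->t_task_lba = le64_to_cpu(req->cmd->rw.slba) <<
					(req->ns->blksize_shift - 9);
		ios->data_length = nvmet_rw_len(req);
		ios->data_direction = (req->cmd->rw.opcode == nvme_cmd_write) ?
					DMA_TO_DEVICE : DMA_FROM_DEVICE;
		iomem->t_data_sg = req->sg;
		iomem->t_data_nents = req->sg_cnt;
		ios->se_dev = dev;
		ios->iomem = iomem;
		ios->t_comp_func = &nvmet_complete_ios;

		/* ... and hand it to the configured target-core backend */
		sbc_ops->execute_rw(ios, iomem->t_data_sg, iomem->t_data_nents,
				    ios->data_direction, false /* fua_write */,
				    &nvmet_complete_ios);
	}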

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/io-cmd.c | 116 ++++++++++++++++++++++---------------------
 drivers/nvme/target/nvmet.h  |   7 +++
 2 files changed, 67 insertions(+), 56 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 38c2e97..133a14a 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -14,20 +14,16 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include <linux/blkdev.h>
 #include <linux/module.h>
+#include <target/target_core_base.h>
+#include <target/target_core_backend.h>
 #include "nvmet.h"
 
-#if 0
-static void nvmet_bio_done(struct bio *bio)
+static void nvmet_complete_ios(struct target_iostate *ios, u16 status)
 {
-	struct nvmet_req *req = bio->bi_private;
-
-	nvmet_req_complete(req,
-		bio->bi_error ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+	struct nvmet_req *req = container_of(ios, struct nvmet_req, t_iostate);
 
-	if (bio != &req->inline_bio)
-		bio_put(bio);
+	nvmet_req_complete(req, status ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
 }
-#endif
 
 static inline u32 nvmet_rw_len(struct nvmet_req *req)
 {
@@ -35,72 +31,80 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
-#if 0
-static void nvmet_inline_bio_init(struct nvmet_req *req)
-{
-	struct bio *bio = &req->inline_bio;
-
-	bio_init(bio);
-	bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;
-	bio->bi_io_vec = req->inline_bvec;
-}
-#endif
-
 static void nvmet_execute_rw(struct nvmet_req *req)
 {
-#if 0
-	int sg_cnt = req->sg_cnt;
-	struct scatterlist *sg;
-	struct bio *bio;
+	struct target_iostate *ios = &req->t_iostate;
+	struct target_iomem *iomem = &req->t_iomem;
+	struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+	struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
 	sector_t sector;
-	blk_qc_t cookie;
-	int rw, i;
-#endif
+	enum dma_data_direction data_direction;
+	sense_reason_t rc;
+	bool fua_write = false, prot_enabled = false;
+
+	if (!sbc_ops || !sbc_ops->execute_rw) {
+		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		return;
+	}
+
 	if (!req->sg_cnt) {
 		nvmet_req_complete(req, 0);
 		return;
 	}
-#if 0
+
 	if (req->cmd->rw.opcode == nvme_cmd_write) {
 		if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
-			rw = WRITE_FUA;
-		else
-			rw = WRITE;
+			fua_write = true;
+
+		data_direction = DMA_TO_DEVICE;
 	} else {
-		rw = READ;
+		data_direction = DMA_FROM_DEVICE;
 	}
 
 	sector = le64_to_cpu(req->cmd->rw.slba);
 	sector <<= (req->ns->blksize_shift - 9);
 
-	nvmet_inline_bio_init(req);
-	bio = &req->inline_bio;
-	bio->bi_bdev = req->ns->bdev;
-	bio->bi_iter.bi_sector = sector;
-	bio->bi_private = req;
-	bio->bi_end_io = nvmet_bio_done;
-
-	for_each_sg(req->sg, sg, req->sg_cnt, i) {
-		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
-				!= sg->length) {
-			struct bio *prev = bio;
-
-			bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-			bio->bi_bdev = req->ns->bdev;
-			bio->bi_iter.bi_sector = sector;
-
-			bio_chain(bio, prev);
-			cookie = submit_bio(rw, prev);
-		}
+	ios->t_task_lba = sector;
+	ios->data_length = nvmet_rw_len(req);
+	ios->data_direction = data_direction;
+	iomem->t_data_sg = req->sg;
+	iomem->t_data_nents = req->sg_cnt;
+	iomem->t_prot_sg = req->prot_sg;
+	iomem->t_prot_nents = req->prot_sg_cnt;
+
+	// XXX: Make common between sbc_check_prot and nvme-target
+	switch (dev->dev_attrib.pi_prot_type) {
+	case TARGET_DIF_TYPE3_PROT:
+		ios->reftag_seed = 0xffffffff;
+		prot_enabled = true;
+		break;
+	case TARGET_DIF_TYPE1_PROT:
+		ios->reftag_seed = ios->t_task_lba;
+		prot_enabled = true;
+		break;
+	default:
+		break;
+	}
 
-		sector += sg->length >> 9;
-		sg_cnt--;
+	if (prot_enabled) {
+		ios->prot_type = dev->dev_attrib.pi_prot_type;
+		ios->prot_length = dev->prot_length *
+				       (le16_to_cpu(req->cmd->rw.length) + 1);
+#if 0
+		printk("req->cmd->rw.length: %u\n", le16_to_cpu(req->cmd->rw.length));
+		printk("nvmet_rw_len: %u\n", nvmet_rw_len(req));
+		printk("req->se_cmd.prot_type: %d\n", req->se_cmd.prot_type);
+		printk("req->se_cmd.prot_length: %u\n", req->se_cmd.prot_length);
+#endif
 	}
 
-	cookie = submit_bio(rw, bio);
+	ios->se_dev = dev;
+	ios->iomem = iomem;
+	ios->t_comp_func = &nvmet_complete_ios;
 
-	blk_poll(bdev_get_queue(req->ns->bdev), cookie);
-#endif
+	rc = sbc_ops->execute_rw(ios, iomem->t_data_sg, iomem->t_data_nents,
+				 ios->data_direction, fua_write,
+				 &nvmet_complete_ios);
 }
 
 static void nvmet_execute_flush(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index af616d0..a3ab4fb 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -25,6 +25,7 @@
 #include <linux/configfs.h>
 #include <linux/rcupdate.h>
 #include <linux/blkdev.h>
+#include <target/target_core_base.h>
 
 #define NVMET_ASYNC_EVENTS		4
 #define NVMET_ERROR_LOG_SLOTS		128
@@ -262,6 +263,12 @@ struct nvmet_req {
 	int			sg_cnt;
 	size_t			data_len;
 
+	struct scatterlist	*prot_sg;
+	int			prot_sg_cnt;
+
+	struct target_iostate	t_iostate;
+	struct target_iomem	t_iomem;
+
 	struct nvmet_port	*port;
 
 	void (*execute)(struct nvmet_req *req);
-- 
1.9.1


* [RFC-v2 08/11] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (6 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 07/11] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 09/11] nvmet/io-cmd: Hookup sbc_ops->execute_unmap " Nicholas A. Bellinger
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts nvmet_execute_flush() to utilize
sbc_ops->execute_sync_cache() for target_iostate
submission into existing backend drivers via
configfs in /sys/kernel/config/target/core/.
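
The resulting flush path is essentially (condensed from the diff
below; the backend completes the request via nvmet_complete_ios()):

	struct target_iostate *ios = &req->t_iostate;
	struct se_device *dev = rcu_dereference_raw(req->ns->dev);
	struct sbc_ops *sbc_ops = dev->transport->sbc_ops;

	ios->se_dev = dev;
	ios->iomem = NULL;
	ios->t_comp_func = &nvmet_complete_ios;

	sbc_ops->execute_sync_cache(ios, false);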

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/io-cmd.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 133a14a..23905a8 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -109,18 +109,21 @@ static void nvmet_execute_rw(struct nvmet_req *req)
 
 static void nvmet_execute_flush(struct nvmet_req *req)
 {
-#if 0
-	struct bio *bio;
+	struct target_iostate *ios = &req->t_iostate;
+	struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+	struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
+	sense_reason_t rc;
 
-	nvmet_inline_bio_init(req);
-	bio = &req->inline_bio;
+	if (!sbc_ops || !sbc_ops->execute_sync_cache) {
+		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		return;
+	}
 
-	bio->bi_bdev = req->ns->bdev;
-	bio->bi_private = req;
-	bio->bi_end_io = nvmet_bio_done;
+	ios->se_dev = dev;
+	ios->iomem = NULL;
+	ios->t_comp_func = &nvmet_complete_ios;
 
-	submit_bio(WRITE_FLUSH, bio);
-#endif
+	rc = sbc_ops->execute_sync_cache(ios, false);
 }
 
 #if 0
-- 
1.9.1


* [RFC-v2 09/11] nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (7 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 08/11] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache " Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 10/11] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts nvmet_execute_discard() to utilize
sbc_ops->execute_unmap() for target_iostate submission
into existing backend drivers via configfs in
/sys/kernel/config/target/core/.
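
Per DSM range, the discard submission now reduces to roughly the
following (condensed from the diff below; ranges are chained onto a
single bio whose completion finishes the nvmet request):

	/* called once per nvme_dsm_range in nvmet_execute_discard() */
	rc = sbc_ops->execute_unmap(&req->t_iostate,
			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
			&bio);
	if (rc)
		return NVME_SC_INTERNAL | NVME_SC_DNR;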

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/io-cmd.c | 47 ++++++++++++++++++++++++++++++++------------
 1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 23905a8..605f560 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -126,52 +126,73 @@ static void nvmet_execute_flush(struct nvmet_req *req)
 	rc = sbc_ops->execute_sync_cache(ios, false);
 }
 
-#if 0
-static u16 nvmet_discard_range(struct nvmet_ns *ns,
-		struct nvme_dsm_range *range, int type, struct bio **bio)
+static u16 nvmet_discard_range(struct nvmet_req *req, struct sbc_ops *sbc_ops,
+		struct nvme_dsm_range *range, struct bio **bio)
 {
-	if (__blkdev_issue_discard(ns->bdev,
+	struct nvmet_ns *ns = req->ns;
+	sense_reason_t rc;
+
+	rc = sbc_ops->execute_unmap(&req->t_iostate,
 			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
-			GFP_KERNEL, type, bio))
+			bio);
+	if (rc)
 		return NVME_SC_INTERNAL | NVME_SC_DNR;
 
 	return 0;
 }
-#endif
+
+static void nvmet_discard_bio_done(struct bio *bio)
+{
+	struct nvmet_req *req = bio->bi_private;
+	int err = bio->bi_error;
+
+	bio_put(bio);
+	nvmet_req_complete(req, err ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+}
 
 static void nvmet_execute_discard(struct nvmet_req *req)
 {
-#if 0
-	struct nvme_dsm_range range;
+	struct target_iostate *ios = &req->t_iostate;
+	struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+	struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
 	struct bio *bio = NULL;
-	int type = REQ_WRITE | REQ_DISCARD, i;
+	struct nvme_dsm_range range;
+	int i;
 	u16 status;
 
+	if (!sbc_ops || !sbc_ops->execute_unmap) {
+		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		return;
+	}
+
+	ios->se_dev = dev;
+	ios->iomem = NULL;
+	ios->t_comp_func = NULL;
+
 	for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
 		status = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
 				sizeof(range));
 		if (status)
 			break;
 
-		status = nvmet_discard_range(req->ns, &range, type, &bio);
+		status = nvmet_discard_range(req, sbc_ops, &range, &bio);
 		if (status)
 			break;
 	}
 
 	if (bio) {
 		bio->bi_private = req;
-		bio->bi_end_io = nvmet_bio_done;
+		bio->bi_end_io = nvmet_discard_bio_done;
 		if (status) {
 			bio->bi_error = -EIO;
 			bio_endio(bio);
 		} else {
-			submit_bio(type, bio);
+			submit_bio(REQ_WRITE | REQ_DISCARD, bio);
 		}
 	} else {
 		nvmet_req_complete(req, status);
 	}
-#endif
 }
 
 static void nvmet_execute_dsm(struct nvmet_req *req)
-- 
1.9.1


* [RFC-v2 10/11] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (8 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 09/11] nvmet/io-cmd: Hookup sbc_ops->execute_unmap " Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14  4:35 ` [RFC-v2 11/11] nvmet/loop: Add support for bio integrity handling Nicholas A. Bellinger
  2016-06-14 14:52 ` [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Christoph Hellwig
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch updates nvmet_execute_identify_ns() to report
target-core backend T10-PI related feature bits to the
NVMe host controller.

Note this assumes support for NVME_NS_DPC_PI_TYPE1 and
NVME_NS_DPC_PI_TYPE3 as reported by backend drivers via
/sys/kernel/config/target/core/*/*/attrib/pi_prot_type.
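
The identify payload then carries roughly this mapping (condensed
from the diff below; only Type 1 and Type 3 protection are reported):

	/* se_device pi_prot_type -> NVMe ID_NS metadata/protection bits */
	if (dev && dev->dev_attrib.pi_prot_type) {
		id->lbaf[0].ms = cpu_to_le16(sizeof(struct t10_pi_tuple));

		if (dev->dev_attrib.pi_prot_type == 1)
			id->dps = NVME_NS_DPC_PI_TYPE1;
		else if (dev->dev_attrib.pi_prot_type == 3)
			id->dps = NVME_NS_DPC_PI_TYPE3;
	}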

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/admin-cmd.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 240e323..3a808dc 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -200,6 +200,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 {
 	struct nvmet_ns *ns;
 	struct nvme_id_ns *id;
+	struct se_device *dev;
 	u16 status = 0;
 
 	ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
@@ -228,6 +229,22 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 	id->nlbaf = 0;
 	id->flbas = 0;
 
+	/* Populate bits for T10-PI from se_device backend */
+	rcu_read_lock();
+	dev = rcu_dereference(ns->dev);
+	if (dev && dev->dev_attrib.pi_prot_type) {
+		int pi_prot_type = dev->dev_attrib.pi_prot_type;
+
+		id->lbaf[0].ms = cpu_to_le16(sizeof(struct t10_pi_tuple));
+		printk("nvmet_set_id_ns: ms: %u\n", id->lbaf[0].ms);
+
+		if (pi_prot_type == 1)
+			id->dps = NVME_NS_DPC_PI_TYPE1;
+		else if (pi_prot_type == 3)
+			id->dps = NVME_NS_DPC_PI_TYPE3;
+	}
+	rcu_read_unlock();
+
 	/*
 	 * Our namespace might always be shared.  Not just with other
 	 * controllers, but also with any other user of the block device.
-- 
1.9.1


* [RFC-v2 11/11] nvmet/loop: Add support for bio integrity handling
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (9 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 10/11] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits Nicholas A. Bellinger
@ 2016-06-14  4:35 ` Nicholas A. Bellinger
  2016-06-14 14:52 ` [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Christoph Hellwig
  11 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-14  4:35 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, linux-nvme, Jens Axboe, Christoph Hellwig,
	Keith Busch, Jay Freyensee, Martin Petersen, Sagi Grimberg,
	Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for nvme/loop block integrity,
based upon the reported ID_NS.ms + ID_NS.dps feature
bits in nvmet_execute_identify_ns().
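
On the initiator side of the loop queue, the integrity payload is
mapped roughly as follows (condensed from the diff below; the patch
enforces the single-segment assumption with a BUG_ON):

	if (blk_integrity_rq(req)) {
		/* a single integrity segment is assumed per request */
		sg_init_table(&iod->meta_sg, 1);
		blk_rq_map_integrity_sg(hctx->queue, req->bio, &iod->meta_sg);

		/* hand the protection SG to the nvmet request / backend */
		iod->req.prot_sg = &iod->meta_sg;
		iod->req.prot_sg_cnt = 1;
	}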

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/nvme/target/loop.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e9f31d4..480a7ef 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -42,6 +42,7 @@ struct nvme_loop_iod {
 	struct nvme_loop_queue	*queue;
 	struct work_struct	work;
 	struct sg_table		sg_table;
+	struct scatterlist	meta_sg;
 	struct scatterlist	first_sgl[];
 };
 
@@ -201,6 +202,23 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		BUG_ON(iod->req.sg_cnt > req->nr_phys_segments);
 	}
 
+	if (blk_integrity_rq(req)) {
+		int count;
+
+		if (blk_rq_count_integrity_sg(hctx->queue, req->bio) != 1)
+			BUG_ON(1);
+
+		sg_init_table(&iod->meta_sg, 1);
+		count = blk_rq_map_integrity_sg(hctx->queue, req->bio,
+						&iod->meta_sg);
+
+		iod->req.prot_sg = &iod->meta_sg;
+		iod->req.prot_sg_cnt = 1;
+
+		pr_debug("nvme/loop: Set prot_sg %p and prot_sg_cnt: %d\n",
+			iod->req.prot_sg, iod->req.prot_sg_cnt);
+	}
+
 	iod->cmd.common.command_id = req->tag;
 	blk_mq_start_request(req);
 
-- 
1.9.1


* Re: [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs
  2016-06-14  4:35 [RFC-v2 00/11] nvmet: Add support for multi-tenant configfs Nicholas A. Bellinger
                   ` (10 preceding siblings ...)
  2016-06-14  4:35 ` [RFC-v2 11/11] nvmet/loop: Add support for bio integrity handling Nicholas A. Bellinger
@ 2016-06-14 14:52 ` Christoph Hellwig
  11 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2016-06-14 14:52 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, linux-nvme, Jens Axboe,
	Christoph Hellwig, Keith Busch, Jay Freyensee, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn

On Tue, Jun 14, 2016 at 04:35:35AM +0000, Nicholas A. Bellinger wrote:
> Comments..?

Still no good reason for doing anything like this.

On a conceptual level:

The NVMe target is a front end implementing a simple protocol to
export block devices to a remote host.  The SCSI target is a larger
front end exposing a more complex protocol to remote hosts.  Neither
of them should actually implement any real protocol-independent
behavior, and except for persistent reservations in the SCSI target,
neither of them does.

On a practical level it means we drag in over 25,000 lines of code
as a dependency, without actually dropping any code in the nvmet
module, and vastly more complicated object hierarchies that don't
make any sense for the tight-knit NVMe standard.  We'd also get
dragged into the nightmare of divergent, incompatible user space
tooling, and we'd lose all the test coverage we've built up.  We'd
also have to deal with tons of tunables that neither fit the
protocol we implement nor the philosophy of the project.
