public inbox for linux-pci@vger.kernel.org
From: Damien Le Moal <dlemoal@kernel.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: linux-nvme@lists.infradead.org, "Christoph Hellwig" <hch@lst.de>,
	"Keith Busch" <kbusch@kernel.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	linux-pci@vger.kernel.org, "Krzysztof Wilczyński" <kw@linux.com>,
	"Kishon Vijay Abraham I" <kishon@kernel.org>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
	"Rick Wertenbroek" <rick.wertenbroek@gmail.com>,
	"Niklas Cassel" <cassel@kernel.org>
Subject: Re: [PATCH v4 18/18] Documentation: Document the NVMe PCI endpoint target driver
Date: Tue, 17 Dec 2024 09:40:29 -0800	[thread overview]
Message-ID: <5c8ad404-6121-4aba-ad39-73794bf8532f@kernel.org> (raw)
In-Reply-To: <20241217173003.sqz67o24z5co7dck@thinkpad>

On 2024/12/17 9:30, Manivannan Sadhasivam wrote:
>> +Now, create a subsystem and a port that we will use to create a PCI target
>> +controller when setting up the NVMe PCI endpoint target device. In this
>> +example, the port is created with a maximum of 4 I/O queue pairs::
>> +
>> +        # cd /sys/kernel/config/nvmet/subsystems
>> +        # mkdir nvmepf.0.nqn
>> +        # echo -n "Linux-nvmet-pciep" > nvmepf.0.nqn/attr_model
>> +        # echo "0x1b96" > nvmepf.0.nqn/attr_vendor_id
>> +        # echo "0x1b96" > nvmepf.0.nqn/attr_subsys_vendor_id
>> +        # echo 1 > nvmepf.0.nqn/attr_allow_any_host
>> +        # echo 4 > nvmepf.0.nqn/attr_qid_max
>> +
>> +Next, create and enable the subsystem namespace using the null_blk block device::
>> +
>> +        # mkdir nvmepf.0.nqn/namespaces/1
>> +        # echo -n "/dev/nullb0" > nvmepf.0.nqn/namespaces/1/device_path
>> +        # echo 1 > "pci_epf_nvme.0.nqn/namespaces/1/enable"
> 
> I have to do, 'echo 1 > nvmepf.0.nqn/namespaces/1/enable'

Good catch. That is the old name from previous version. Will fix this.
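For reference, the namespace setup with the corrected directory name can be wrapped up like this (a rough sketch, not from the patch; the helper name is made up, the paths follow the example above):

```shell
# Hypothetical helper: namespace setup with the corrected directory
# name (nvmepf.0.nqn instead of the old pci_epf_nvme.0.nqn).
setup_namespace() {
    local subsys_dir="$1"   # e.g. /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn
    local blkdev="$2"       # e.g. /dev/nullb0
    # Create namespace 1, point it at the block device, then enable it.
    mkdir -p "${subsys_dir}/namespaces/1"
    echo -n "${blkdev}" > "${subsys_dir}/namespaces/1/device_path"
    echo 1 > "${subsys_dir}/namespaces/1/enable"
}
```

On the endpoint this would be called as, e.g., "setup_namespace /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn /dev/nullb0".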

>> +
>> +Finally, create the target port and link it to the subsystem::
>> +
>> +        # cd /sys/kernel/config/nvmet/ports
>> +        # mkdir 1
>> +        # echo -n "pci" > 1/addr_trtype
>> +        # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
>> +                /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
>> +
>> +Creating an NVMe PCI Endpoint Device
>> +------------------------------------
>> +
>> +With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
>> +device can now be created and enabled. The NVMe PCI endpoint target driver
>> +should already be loaded (that is done automatically when the port is created)::
>> +
>> +        # ls /sys/kernel/config/pci_ep/functions
>> +        nvmet_pciep
>> +
>> +Next, create function 0::
>> +
>> +        # cd /sys/kernel/config/pci_ep/functions/nvmet_pciep
>> +        # mkdir nvmepf.0
>> +        # ls nvmepf.0/
>> +        baseclass_code    msix_interrupts   secondary
>> +        cache_line_size   nvme              subclass_code
>> +        deviceid          primary           subsys_id
>> +        interrupt_pin     progif_code       subsys_vendor_id
>> +        msi_interrupts    revid             vendorid
>> +
>> +Configure the function using any vendor ID and device ID::
>> +
>> +        # cd /sys/kernel/config/pci_ep/functions/nvmet_pciep
>> +        # echo 0x1b96 > nvmepf.0/vendorid
>> +        # echo 0xBEEF > nvmepf.0/deviceid
>> +        # echo 32 > nvmepf.0/msix_interrupts
>> +
>> +If the PCI endpoint controller used does not support MSI-X, MSI can be
>> +configured instead::
>> +
>> +        # echo 32 > nvmepf.0/msi_interrupts
>> +
>> +Next, let's bind our endpoint device with the target subsystem and port that we
>> +created::
>> +
>> +        # echo 1 > nvmepf.0/portid
> 
> 	'echo 1 > nvmepf.0/nvme/portid'
> 
>> +        # echo "nvmepf.0.nqn" > nvmepf.0/subsysnqn
> 
> 	'echo 1 > nvmepf.0/nvme/subsysnqn'

Yep. Good catch.

> 
>> +
>> +The endpoint function can then be bound to the endpoint controller and the
>> +controller started::
>> +
>> +        # cd /sys/kernel/config/pci_ep
>> +        # ln -s functions/nvmet_pciep/nvmepf.0 controllers/a40000000.pcie-ep/
>> +        # echo 1 > controllers/a40000000.pcie-ep/start
>> +
>> +On the endpoint machine, kernel messages will show information as the NVMe
>> +target device and endpoint device are created and connected.
>> +
> 
> For some reason, I cannot get the function driver working. Getting this warning
> on the ep:
> 
> 	nvmet: connect request for invalid subsystem 1!
> 
> I didn't debug it further. Will do it tomorrow morning and let you know.

Hmmm... Weird. You should not ever see a connect request/command at all.
Can you try this script:

https://github.com/damien-lemoal/buildroot/blob/rock5b_ep_v25/board/radxa/rock5b-ep/overlay/root/pci-ep/nvmet-pciep

Just run "./nvmet-pciep start" after booting the endpoint board.

The command example in the documentation is an extract from what this script
does. I think the missing:

	echo 1 > ${SUBSYSNQN}/attr_allow_any_host

may be the reason for this error.
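A quick sanity check along those lines (a rough sketch; the helper name is hypothetical, the path layout follows the configfs example above):

```shell
# Hypothetical helper: verify that attr_allow_any_host reads back "1"
# for a nvmet subsystem directory. If it does not, hosts are rejected
# and the controller connect fails.
check_allow_any_host() {
    local nqn_dir="$1"
    if [ "$(cat "${nqn_dir}/attr_allow_any_host" 2>/dev/null)" = "1" ]; then
        echo "allow_any_host: ok"
    else
        echo "attr_allow_any_host is not set for ${nqn_dir}"
    fi
}
```

Running it against /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn on the endpoint should print "allow_any_host: ok" if the attribute was set.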


-- 
Damien Le Moal
Western Digital Research

