From: Jens Axboe <axboe@kernel.dk>
To: Chaitanya Kulkarni <kch@nvidia.com>, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me
Subject: Re: [PATCH] nvmet: dynamically allocate nvmet_ns->nguid
Date: Wed, 3 May 2023 09:45:12 -0600	[thread overview]
Message-ID: <b9688be8-fd92-99df-7083-c8a695e77377@kernel.dk> (raw)
In-Reply-To: <20230502053051.14621-1-kch@nvidia.com>

On 5/1/23 11:30 PM, Chaitanya Kulkarni wrote:
> The nvmet_ns struct is used in the I/O fast path by both the bdev and
> file backends, but its embedded nguid array is not accessed in that
> path. Carrying the full array inline therefore only enlarges the
> struct and the memory pulled in on every access.
> 
> This patch dynamically allocates nvmet_ns->nguid, shrinking the
> nvmet_ns struct and avoiding the unnecessary fast-path memory traffic
> of an embedded array compared to a pointer. The nguid allocation uses
> kzalloc() with the same GFP_KERNEL policy already used to allocate the
> nvmet_ns struct itself.
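
A rough sketch of the change the description implies (illustrative names
only, not the posted patch): the embedded array becomes a pointer,
allocated with the same GFP_KERNEL policy as the namespace struct itself.

#include <linux/slab.h>
#include <linux/types.h>

/*
 * Illustrative only: a stand-in struct with the embedded
 * u8 nguid[16] replaced by a pointer.
 */
struct nvmet_ns_example {
	u8	*nguid;		/* was: u8 nguid[16]; */
};

static struct nvmet_ns_example *example_ns_alloc(void)
{
	struct nvmet_ns_example *ns;

	ns = kzalloc(sizeof(*ns), GFP_KERNEL);
	if (!ns)
		return NULL;

	/* Same GFP_KERNEL policy as the containing struct. */
	ns->nguid = kzalloc(16, GFP_KERNEL);
	if (!ns->nguid) {
		kfree(ns);
		return NULL;
	}
	return ns;
}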

Replacing 16 bytes of embedded memory with 8 bytes and then alloc+free
seems like a poor tradeoff.

Why not just arrange it a bit more sanely, and also push the config
stuff out-of-line, since it is not used in the fast path? The 30-second
job below takes the struct from 456 -> 440 bytes for me, and has a
better layout imho.
 

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index dc60a22646f7..790d7513e442 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -61,29 +61,29 @@ struct nvmet_ns {
 	struct block_device	*bdev;
 	struct file		*file;
 	bool			readonly;
+	bool			buffered_io;
+	bool			enabled;
+	u8			csi;
 	u32			nsid;
 	u32			blksize_shift;
+	u32			anagrpid;
 	loff_t			size;
 	u8			nguid[16];
 	uuid_t			uuid;
-	u32			anagrpid;
 
-	bool			buffered_io;
-	bool			enabled;
 	struct nvmet_subsys	*subsys;
 	const char		*device_path;
 
-	struct config_group	device_group;
-	struct config_group	group;
-
 	struct completion	disable_done;
 	mempool_t		*bvec_pool;
 
-	int			use_p2pmem;
 	struct pci_dev		*p2p_dev;
+	int			use_p2pmem;
 	int			pi_type;
 	int			metadata_size;
-	u8			csi;
+
+	struct config_group	device_group;
+	struct config_group	group;
 };
 
 static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
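
To illustrate the principle behind the reshuffle (the struct and field
names below are invented, not the real nvmet_ns members): grouping the
bool/u8 members together lets them share what would otherwise be
alignment padding, which is where the 456 -> 440 byte saving comes from.
The actual holes in a built object can be inspected with pahole. A
minimal userspace sketch:

#include <stdio.h>
#include <stdint.h>

/* Small members scattered between pointer/64-bit fields force padding. */
struct scattered {
	void		*bdev;		/* 8 bytes			*/
	_Bool		readonly;	/* 1 byte  + 3 bytes padding	*/
	uint32_t	nsid;		/* 4 bytes			*/
	_Bool		buffered_io;	/* 1 byte  + 7 bytes padding	*/
	long long	size;		/* 8 bytes			*/
	uint8_t		csi;		/* 1 byte  + 7 bytes tail pad	*/
};

/* Grouping the small members lets them share one word. */
struct grouped {
	void		*bdev;
	long long	size;
	uint32_t	nsid;
	_Bool		readonly;
	_Bool		buffered_io;
	uint8_t		csi;		/* 1 byte  + 1 byte tail pad	*/
};

int main(void)
{
	/* On a typical LP64 target this prints 40 vs 24 bytes. */
	printf("scattered: %zu\n", sizeof(struct scattered));
	printf("grouped:   %zu\n", sizeof(struct grouped));
	return 0;
}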

-- 
Jens Axboe



Thread overview: 4+ messages
2023-05-02  5:30 [PATCH] nvmet: dynamically allocate nvmet_ns->nguid Chaitanya Kulkarni
2023-05-03 15:45 ` Jens Axboe [this message]
2023-05-03 22:39   ` Chaitanya Kulkarni
2023-05-04 17:55     ` Jens Axboe
