From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Guixin Liu <kanie@linux.alibaba.com>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"hch@lst.de" <hch@lst.de>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>,
"sagi@grimberg.me" <sagi@grimberg.me>
Subject: Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
Date: Thu, 31 Oct 2024 06:23:01 +0000
Message-ID: <3cb63cfd-a4d5-4516-a2eb-7b968a036f7a@nvidia.com>
In-Reply-To: <20241031022720.27202-1-kanie@linux.alibaba.com>
On 10/30/24 19:27, Guixin Liu wrote:
> In some complex scenarios, we deploy multiple tasks on a single machine
> (hybrid deployment), such as:
> 1. Docker containers for function computation (background processing).
> 2. Docker containers for real-time tasks.
> 3. Docker containers for monitoring, event handling, and management.
> 4. An NVMe target server.
> Each of these components is restricted to its own CPU cores to prevent
> mutual interference and ensure strict isolation. Additionally, we make
> the nvmet_wq visible in sysfs, allowing for tuning its attributes
> through sysfs, such as cpumask.
How about the following? No need to send a V3; this can be done at
the time of applying the patch if you are okay with it :-
" In some complex scenarios, we deploy multiple tasks on a single machine
(hybrid deployment), such as Docker containers for function computation
(background processing), real-time tasks, monitoring, event handling,
and management, along with an NVMe target server.
Each of these components is restricted to its own CPU cores to prevent
mutual interference and ensure strict isolation. To achieve this level
of isolation for nvmet_wq we need to use sysfs tunables such as
cpumask that are currently not accessible.
Add the WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
that the workqueue tunables are exported to userspace via sysfs.
With this patch :-
nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
affinity_scope affinity_strict cpumask max_active nice per_cpu
power subsystem uevent
"
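
For reference, a sketch of what the change presumably looks like in
drivers/nvme/target/core.c (the exact existing flags are an assumption
on my part, not copied from the patch):

```c
/* before: nvmet_wq is created without WQ_SYSFS, so no sysfs knobs */
-	nvmet_wq = alloc_workqueue("nvmet-wq",
-				   WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
/* after: WQ_SYSFS exposes the workqueue under
 * /sys/devices/virtual/workqueue/nvmet-wq/ so attributes such as
 * cpumask, nice, and affinity_scope become tunable from userspace */
+	nvmet_wq = alloc_workqueue("nvmet-wq",
+				   WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 0);
```

An operator could then, for example, confine the workqueue with
something like `echo 0f > /sys/devices/virtual/workqueue/nvmet-wq/cpumask`
(mask value is illustrative).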
With that, looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
Thread overview: 7+ messages
2024-10-31 2:27 [PATCH v2] nvmet: make nvmet_wq visible in sysfs Guixin Liu
2024-10-31 6:23 ` Chaitanya Kulkarni [this message]
2024-10-31 6:38 ` Guixin Liu
2024-10-31 6:50 ` Chaitanya Kulkarni
2024-10-31 6:57 ` hch
2024-10-31 7:00 ` Guixin Liu
2024-11-05 16:36 ` Keith Busch