Linux-NVME Archive on lore.kernel.org
* [PATCH v2] nvmet: make nvmet_wq visible in sysfs
@ 2024-10-31  2:27 Guixin Liu
  2024-10-31  6:23 ` Chaitanya Kulkarni
  0 siblings, 1 reply; 7+ messages in thread
From: Guixin Liu @ 2024-10-31  2:27 UTC (permalink / raw)
  To: hch, sagi, kch; +Cc: linux-nvme

In some complex scenarios, we deploy multiple tasks on a single machine
(hybrid deployment), such as:
  1. Docker containers for function computation (background processing).
  2. Docker containers for real-time tasks.
  3. Docker containers for monitoring, event handling, and management.
  4. An NVMe target server.
Each of these components is restricted to its own CPU cores to prevent
mutual interference and ensure strict isolation. Additionally, we make
the nvmet_wq visible in sysfs, allowing for tuning its attributes
through sysfs, such as cpumask.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/target/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ed2424f8a396..15b25f464e77 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1717,7 +1717,7 @@ static int __init nvmet_init(void)
 		goto out_free_zbd_work_queue;
 
 	nvmet_wq = alloc_workqueue("nvmet-wq",
-			WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+			WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 0);
 	if (!nvmet_wq)
 		goto out_free_buffered_work_queue;
 
-- 
2.43.0
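For illustration (an editorial sketch, not part of the patch): once WQ_SYSFS exports nvmet-wq, an administrator could pin the workqueue to dedicated cores by writing a hex cpumask to the exported attribute. The sysfs path below only exists on a kernel with this patch and the nvmet module loaded, so the write is guarded by an existence check.

```shell
# Hypothetical tuning step enabled by this patch: restrict nvmet_wq to
# CPUs 0-3 by writing a hex cpumask to the exported sysfs attribute.
WQ=/sys/devices/virtual/workqueue/nvmet-wq

# Bits 0..3 set -> 0xf, i.e. CPUs 0, 1, 2 and 3.
mask=$(printf '%x' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) )))
echo "cpumask: $mask"

# Apply only when the attribute actually exists (nvmet loaded, patch applied).
if [ -w "$WQ/cpumask" ]; then
    echo "$mask" > "$WQ/cpumask"
fi
```

The same directory also exposes the other tunables seen in the thread's ls output (nice, max_active, affinity_scope, etc.).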



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  2:27 [PATCH v2] nvmet: make nvmet_wq visible in sysfs Guixin Liu
@ 2024-10-31  6:23 ` Chaitanya Kulkarni
  2024-10-31  6:38   ` Guixin Liu
  0 siblings, 1 reply; 7+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-31  6:23 UTC (permalink / raw)
  To: Guixin Liu
  Cc: linux-nvme@lists.infradead.org, hch@lst.de, Chaitanya Kulkarni,
	sagi@grimberg.me

On 10/30/24 19:27, Guixin Liu wrote:
> In some complex scenarios, we deploy multiple tasks on a single machine
> (hybrid deployment), such as:
>    1. Docker containers for function computation (background processing).
>    2. Docker containers for real-time tasks.
>    3. Docker containers for monitoring, event handling, and management.
>    4. An NVMe target server.
> Each of these components is restricted to its own CPU cores to prevent
> mutual interference and ensure strict isolation. Additionally, we make
> the nvmet_wq visible in sysfs, allowing for tuning its attributes
> through sysfs, such as cpumask.


How about the following? No need to send a V3, this can be done at
the time of applying the patch if you are okay with it :-

" In  some complex scenarios, we deploy multiple taskson  asingle  machine
(hybrid deployment), suchas  Docker containersfor  function  computation
(background processing), real-time tasks, monitoring,event  handling,
and  management, alongwith  an NVMe target server.

Each  of  these componentsis  restrictedto  its own CPU coresto  prevent
mutual interferenceand  ensurestrict  isolation.To  achieve this level
of  isolation for nvmet_wq we needto  use sysfs tunables such as
cpumask that are currently not accessible.

Add WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
workqueue tunables are exported in the userspace via sysfs.

with this patch :-

nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
affinity_scope  affinity_strict  cpumask  max_active  nice  per_cpu
power  subsystem  uevent

"

With that looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck




* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  6:23 ` Chaitanya Kulkarni
@ 2024-10-31  6:38   ` Guixin Liu
  2024-10-31  6:50     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 7+ messages in thread
From: Guixin Liu @ 2024-10-31  6:38 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me


On 2024/10/31 14:23, Chaitanya Kulkarni wrote:
> On 10/30/24 19:27, Guixin Liu wrote:
>> In some complex scenarios, we deploy multiple tasks on a single machine
>> (hybrid deployment), such as:
>>     1. Docker containers for function computation (background processing).
>>     2. Docker containers for real-time tasks.
>>     3. Docker containers for monitoring, event handling, and management.
>>     4. An NVMe target server.
>> Each of these components is restricted to its own CPU cores to prevent
>> mutual interference and ensure strict isolation. Additionally, we make
>> the nvmet_wq visible in sysfs, allowing for tuning its attributes
>> through sysfs, such as cpumask.
>
> How about the following? No need to send a V3, this can be done at
> the time of applying the patch if you are okay with it :-
>
> " In  some complex scenarios, we deploy multiple taskson  asingle  machine
> (hybrid deployment), suchas  Docker containersfor  function  computation
> (background processing), real-time tasks, monitoring,event  handling,
> and  management, alongwith  an NVMe target server.
>
> Each  of  these componentsis  restrictedto  its own CPU coresto  prevent
> mutual interferenceand  ensurestrict  isolation.To  achieve this level
> of  isolation for nvmet_wq we needto  use sysfs tunables such as
> cpumask that are currently not accessible.
>
> Add WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
> workqueue tunables are exported in the userspace via sysfs.
>
> with this patch :-
>
> nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
> affinity_scope  affinity_strict  cpumask  max_active  nice  per_cpu
> power  subsystem  uevent
>
> "
>
> With that looks good.
>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
>
> -ck
>
Thanks for tuning the commit message, the new content looks good,
but I see some words are joined together: "coresto" -> "cores to",
"interferenceand" -> "interference and", and so on.

Please change this when applying the patch.

Best Regards,

Guixin Liu




* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  6:38   ` Guixin Liu
@ 2024-10-31  6:50     ` Chaitanya Kulkarni
  2024-10-31  6:57       ` hch
  2024-10-31  7:00       ` Guixin Liu
  0 siblings, 2 replies; 7+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-31  6:50 UTC (permalink / raw)
  To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me

On 10/30/24 23:38, Guixin Liu wrote:
> Thanks for tuning the commit message, the new content looks good,
> but I see some words are joined together: "coresto" -> "cores to",
> "interferenceand" -> "interference and", and so on.
>
> Please change this when applying the patch.
>
> Best Regards,
>
> Guixin Liu 

Here is the updated one :-

In some complex scenarios, we deploy multiple tasks on a single machine
(hybrid deployment), such as Docker containers for function computation
(background processing), real-time tasks, monitoring, event handling,
and management, along with an NVMe target server.

Each of these components is restricted to its own CPU cores to prevent
mutual interference and ensure strict isolation. To achieve this level
of isolation for nvmet_wq, we need to use sysfs tunables such as
cpumask that are currently not accessible.

Add the WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so that
workqueue tunables are exported to userspace via sysfs.

with this patch :-

nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
affinity_scope  affinity_strict  cpumask  max_active  nice per_cpu
power  subsystem  uevent


-ck




* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  6:50     ` Chaitanya Kulkarni
@ 2024-10-31  6:57       ` hch
  2024-10-31  7:00       ` Guixin Liu
  1 sibling, 0 replies; 7+ messages in thread
From: hch @ 2024-10-31  6:57 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: Guixin Liu, linux-nvme@lists.infradead.org, hch@lst.de,
	sagi@grimberg.me

Looks good with the updated commit message:

Reviewed-by: Christoph Hellwig <hch@lst.de>




* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  6:50     ` Chaitanya Kulkarni
  2024-10-31  6:57       ` hch
@ 2024-10-31  7:00       ` Guixin Liu
  2024-11-05 16:36         ` Keith Busch
  1 sibling, 1 reply; 7+ messages in thread
From: Guixin Liu @ 2024-10-31  7:00 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me


On 2024/10/31 14:50, Chaitanya Kulkarni wrote:
> On 10/30/24 23:38, Guixin Liu wrote:
>> Thanks for tuning the commit message, the new content looks good,
>> but I see some words are joined together: "coresto" -> "cores to",
>> "interferenceand" -> "interference and", and so on.
>>
>> Please change this when applying the patch.
>>
>> Best Regards,
>>
>> Guixin Liu
> Here is the updated one :-
>
> In some complex scenarios, we deploy multiple tasks on a single machine
> (hybrid deployment), such as Docker containers for function computation
> (background processing), real-time tasks, monitoring, event handling,
> and management, along with an NVMe target server.
>
> Each of these components is restricted to its own CPU cores to prevent
> mutual interference and ensure strict isolation. To achieve this level
> of isolation for nvmet_wq, we need to use sysfs tunables such as
> cpumask that are currently not accessible.
>
> Add the WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so that
> workqueue tunables are exported to userspace via sysfs.
>
> with this patch :-
>
> nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
> affinity_scope  affinity_strict  cpumask  max_active  nice per_cpu
> power  subsystem  uevent
>
>
> -ck
>
Looks good now, my deepest gratitude for the tuning.

Best Regards,

Guixin Liu




* Re: [PATCH v2] nvmet: make nvmet_wq visible in sysfs
  2024-10-31  7:00       ` Guixin Liu
@ 2024-11-05 16:36         ` Keith Busch
  0 siblings, 0 replies; 7+ messages in thread
From: Keith Busch @ 2024-11-05 16:36 UTC (permalink / raw)
  To: Guixin Liu
  Cc: Chaitanya Kulkarni, linux-nvme@lists.infradead.org, hch@lst.de,
	sagi@grimberg.me

On Thu, Oct 31, 2024 at 03:00:17PM +0800, Guixin Liu wrote:
> Looks good now, my deepest gratitude for the tuning.

Thanks all, applied to nvme-6.13 with the updated commit message.



end of thread, other threads:[~2024-11-05 17:07 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-10-31  2:27 [PATCH v2] nvmet: make nvmet_wq visible in sysfs Guixin Liu
2024-10-31  6:23 ` Chaitanya Kulkarni
2024-10-31  6:38   ` Guixin Liu
2024-10-31  6:50     ` Chaitanya Kulkarni
2024-10-31  6:57       ` hch
2024-10-31  7:00       ` Guixin Liu
2024-11-05 16:36         ` Keith Busch
