* [PATCH] nvmet: make nvmet_wq visible in sysfs
@ 2024-10-29 1:49 Guixin Liu
2024-10-29 5:04 ` Chaitanya Kulkarni
0 siblings, 1 reply; 15+ messages in thread
From: Guixin Liu @ 2024-10-29 1:49 UTC (permalink / raw)
To: hch, sagi, kch; +Cc: linux-nvme
Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
through sysfs.
Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
drivers/nvme/target/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ed2424f8a396..15b25f464e77 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1717,7 +1717,7 @@ static int __init nvmet_init(void)
goto out_free_zbd_work_queue;
nvmet_wq = alloc_workqueue("nvmet-wq",
- WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+ WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 0);
if (!nvmet_wq)
goto out_free_buffered_work_queue;
--
2.43.0
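For context, the effect of adding WQ_SYSFS is that the workqueue core registers the queue as a device under /sys/devices/virtual/workqueue/, named after the string passed to alloc_workqueue(), i.e. "nvmet-wq". A rough sketch of what becomes inspectable on a kernel with this patch applied (exact attribute set varies by kernel version):

```shell
# Rough sketch; assumes a kernel with this patch applied.
# WQ_SYSFS exposes the queue under /sys/devices/virtual/workqueue/,
# using the name passed to alloc_workqueue(): "nvmet-wq".
WQ=/sys/devices/virtual/workqueue/nvmet-wq

if [ -d "$WQ" ]; then
    ls "$WQ"             # e.g. cpumask, nice, affinity_scope, ...
    cat "$WQ/cpumask"    # allowed CPUs for the queue's workers (hex mask)
    cat "$WQ/nice"       # nice level of the queue's workers
else
    echo "nvmet-wq not visible: kernel lacks this patch or nvmet not loaded"
fi
```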
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-29 1:49 [PATCH] nvmet: make nvmet_wq visible in sysfs Guixin Liu
@ 2024-10-29 5:04 ` Chaitanya Kulkarni
2024-10-29 6:46 ` Guixin Liu
0 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-29 5:04 UTC (permalink / raw)
To: Guixin Liu
Cc: linux-nvme@lists.infradead.org, hch@lst.de, Chaitanya Kulkarni,
sagi@grimberg.me
On 10/28/24 18:49, Guixin Liu wrote:
> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
> through sysfs.
>
> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
> ---
do you happen to have a use case for this?
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-29 5:04 ` Chaitanya Kulkarni
@ 2024-10-29 6:46 ` Guixin Liu
2024-10-29 19:52 ` Chaitanya Kulkarni
0 siblings, 1 reply; 15+ messages in thread
From: Guixin Liu @ 2024-10-29 6:46 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
> On 10/28/24 18:49, Guixin Liu wrote:
>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>> through sysfs.
>>
>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>> ---
> do you happen to have a use case for this?
>
> -ck
Sometimes, in order to respond promptly to certain events or
management commands, we need to reserve resources and partition
the CPU cores. For example, if there are 4 cores available,
we can dedicate one core to management while the remaining
3 cores handle IO.
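As a sketch of that 1 management / 3 IO core split through the newly exposed attribute (hypothetical shell; assumes a kernel with the patch, where the queue is registered as "nvmet-wq"):

```shell
# Sketch of the 1 management / 3 IO core split described above.
# Assumes a kernel with this patch, which exposes the queue (named
# "nvmet-wq" in alloc_workqueue()) under /sys/devices/virtual/workqueue/.
WQ=/sys/devices/virtual/workqueue/nvmet-wq

# Mask covering CPUs 1-3, leaving CPU 0 free for management work.
io_mask=$(printf '%x' $(( (1 << 1) | (1 << 2) | (1 << 3) )))
echo "IO cpumask: $io_mask"    # prints "IO cpumask: e" (CPUs 1,2,3)

# Applying it needs root and the patched kernel:
if [ -w "$WQ/cpumask" ]; then
    echo "$io_mask" > "$WQ/cpumask"
fi
```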
Best Regards,
Guixin Liu
>
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-29 6:46 ` Guixin Liu
@ 2024-10-29 19:52 ` Chaitanya Kulkarni
2024-10-30 0:49 ` Chaitanya Kulkarni
2024-10-30 1:44 ` Guixin Liu
0 siblings, 2 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-29 19:52 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 10/28/24 23:46, Guixin Liu wrote:
>
> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>> On 10/28/24 18:49, Guixin Liu wrote:
>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>> through sysfs.
>>>
>>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>>> ---
>> do you happen to have a use case for this?
>>
>> -ck
>
> Sometimes, in order to respond promptly to certain events or
> management commands, we need to reserve resources and partition
> the CPU cores. For example, if there are 4 cores available,
> we can dedicate one core to management while the remaining
> 3 cores handle IO.
>
> Best Regards,
> Guixin Liu
>
I'm aware of exposing tunables through sysfs and its benefits; my
question was, do you have a setup where this setting is currently needed?
I've always been asked for the use case on a patch when we expose
something out of the kernel that solves a problem in a deployment ...
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-29 19:52 ` Chaitanya Kulkarni
@ 2024-10-30 0:49 ` Chaitanya Kulkarni
2024-10-30 1:44 ` Guixin Liu
1 sibling, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 0:49 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 10/29/24 12:52, Chaitanya Kulkarni wrote:
> On 10/28/24 23:46, Guixin Liu wrote:
>> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>>> On 10/28/24 18:49, Guixin Liu wrote:
>>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>>> through sysfs.
>>>>
>>>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>>>> ---
>>> do you happen to have a use case for this?
>>>
>>> -ck
>> Sometimes, in order to respond promptly to certain events or
>> management commands, we need to reserve resources and partition
>> the CPU cores. For example, if there are 4 cores available,
>> we can dedicate one core to management while the remaining
>> 3 cores handle IO.
>>
>> Best Regards,
>> Guixin Liu
>>
> I'm aware of exposing tunables through sysfs and its benefits; my
> question was, do you have a setup where this setting is currently needed?
>
> I've always been asked for the use case on a patch when we expose
> something out of the kernel that solves a problem in a deployment ...
>
> -ck
>
>
If others are okay then :-
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-29 19:52 ` Chaitanya Kulkarni
2024-10-30 0:49 ` Chaitanya Kulkarni
@ 2024-10-30 1:44 ` Guixin Liu
2024-10-30 5:53 ` hch
2024-10-30 6:33 ` Chaitanya Kulkarni
1 sibling, 2 replies; 15+ messages in thread
From: Guixin Liu @ 2024-10-30 1:44 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 2024/10/30 03:52, Chaitanya Kulkarni wrote:
> On 10/28/24 23:46, Guixin Liu wrote:
>> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>>> On 10/28/24 18:49, Guixin Liu wrote:
>>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>>> through sysfs.
>>>>
>>>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>>>> ---
>>> do you happen to have a use case for this?
>>>
>>> -ck
>> Sometimes, in order to respond promptly to certain events or
>> management commands, we need to reserve resources and partition
>> the CPU cores. For example, if there are 4 cores available,
>> we can dedicate one core to management while the remaining
>> 3 cores handle IO.
>>
>> Best Regards,
>> Guixin Liu
>>
> I'm aware of exposing tunables through sysfs and its benefits; my
> question was, do you have a setup where this setting is currently needed?
>
> I've always been asked for the use case on a patch when we expose
> something out of the kernel that solves a problem in a deployment ...
>
> -ck
I need to reserve some CPU cores for other things, such as handling
events and management commands, so nvmet_wq must not run on all CPU
cores; currently, I restrict it by setting the cpumask of nvmet_wq
(that's why I expose nvmet_wq to sysfs).
Best Regards,
Guixin Liu
>
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 1:44 ` Guixin Liu
@ 2024-10-30 5:53 ` hch
2024-10-30 6:44 ` Guixin Liu
2024-10-30 6:33 ` Chaitanya Kulkarni
1 sibling, 1 reply; 15+ messages in thread
From: hch @ 2024-10-30 5:53 UTC (permalink / raw)
To: Guixin Liu
Cc: Chaitanya Kulkarni, linux-nvme@lists.infradead.org, hch@lst.de,
sagi@grimberg.me
On Wed, Oct 30, 2024 at 09:44:25AM +0800, Guixin Liu wrote:
> I need to reserve some CPU cores for other things, such as handling
> events and management commands, so nvmet_wq must not run on all CPU
> cores; currently, I restrict it by setting the cpumask of nvmet_wq
> (that's why I expose nvmet_wq to sysfs).
Can you resend the patch with an explanation of the use case in the
commit message? Thanks!
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 1:44 ` Guixin Liu
2024-10-30 5:53 ` hch
@ 2024-10-30 6:33 ` Chaitanya Kulkarni
2024-10-30 11:20 ` Guixin Liu
1 sibling, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 6:33 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 10/29/24 18:44, Guixin Liu wrote:
>
> On 2024/10/30 03:52, Chaitanya Kulkarni wrote:
>> On 10/28/24 23:46, Guixin Liu wrote:
>>> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>>>> On 10/28/24 18:49, Guixin Liu wrote:
>>>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>>>> through sysfs.
>>>>>
>>>>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>>>>> ---
>>>> do you happen to have a use case for this?
>>>>
>>>> -ck
>>> Sometimes, in order to respond promptly to certain events or
>>> management commands, we need to reserve resources and partition
>>> the CPU cores. For example, if there are 4 cores available,
>>> we can dedicate one core to management while the remaining
>>> 3 cores handle IO.
>>>
>>> Best Regards,
>>> Guixin Liu
>>>
>> I'm aware of exposing tunables through sysfs and its benefits; my
>> question was, do you have a setup where this setting is currently needed?
>>
>> I've always been asked for the use case on a patch when we expose
>> something out of the kernel that solves a problem in a deployment ...
>>
>> -ck
>
> I need to reserve some CPU cores for other things, such as handling
> events and management commands, so nvmet_wq must not run on all CPU
> cores; currently, I restrict it by setting the cpumask of nvmet_wq
> (that's why I expose nvmet_wq to sysfs).
>
> Best Regards,
> Guixin Liu
Can you please explain your setup? e.g. transport (tcp/rdma/fc), device
backend (file/block), etc.?
Is nvmet_wq's CPU consumption so high that it doesn't leave bandwidth
to handle events and management commands?
Can you please explain the workload and what kind of event and management
handling is needed where you need to restrict nvmet_wq with a CPUMASK?
The only reason I'm asking is that I've not seen this scenario so far in
the many, many deployments since we added nvmet_wq, and I'd really like
to learn about the scenario.
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 5:53 ` hch
@ 2024-10-30 6:44 ` Guixin Liu
0 siblings, 0 replies; 15+ messages in thread
From: Guixin Liu @ 2024-10-30 6:44 UTC (permalink / raw)
To: hch@lst.de
Cc: Chaitanya Kulkarni, linux-nvme@lists.infradead.org,
sagi@grimberg.me
On 2024/10/30 13:53, hch@lst.de wrote:
> On Wed, Oct 30, 2024 at 09:44:25AM +0800, Guixin Liu wrote:
>> I need to reserve some CPU cores for other things, such as handling
>> events and management commands, so nvmet_wq must not run on all CPU
>> cores; currently, I restrict it by setting the cpumask of nvmet_wq
>> (that's why I expose nvmet_wq to sysfs).
> Can you resend the patch with an explanation of the use case in the
> commit message? Thanks!
Sure.
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 6:33 ` Chaitanya Kulkarni
@ 2024-10-30 11:20 ` Guixin Liu
2024-10-30 18:38 ` Chaitanya Kulkarni
0 siblings, 1 reply; 15+ messages in thread
From: Guixin Liu @ 2024-10-30 11:20 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 2024/10/30 14:33, Chaitanya Kulkarni wrote:
> On 10/29/24 18:44, Guixin Liu wrote:
>> On 2024/10/30 03:52, Chaitanya Kulkarni wrote:
>>> On 10/28/24 23:46, Guixin Liu wrote:
>>>> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>>>>> On 10/28/24 18:49, Guixin Liu wrote:
>>>>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>>>>> through sysfs.
>>>>>>
>>>>>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>>>>>> ---
>>>>> do you happen to have a use case for this?
>>>>>
>>>>> -ck
>>>> Sometimes, in order to respond promptly to certain events or
>>>> management commands, we need to reserve resources and partition
>>>> the CPU cores. For example, if there are 4 cores available,
>>>> we can dedicate one core to management while the remaining
>>>> 3 cores handle IO.
>>>>
>>>> Best Regards,
>>>> Guixin Liu
>>>>
>>> I'm aware of exposing tunables through sysfs and its benefits; my
>>> question was, do you have a setup where this setting is currently needed?
>>>
>>> I've always been asked for the use case on a patch when we expose
>>> something out of the kernel that solves a problem in a deployment ...
>>>
>>> -ck
>> I need to reserve some CPU cores for other things, such as handling
>> events and management commands, so nvmet_wq must not run on all CPU
>> cores; currently, I restrict it by setting the cpumask of nvmet_wq
>> (that's why I expose nvmet_wq to sysfs).
>>
>> Best Regards,
>> Guixin Liu
>>
> Can you please explain your setup? e.g. transport (tcp/rdma/fc), device
> backend (file/block), etc.?
>
> Is nvmet_wq's CPU consumption so high that it doesn't leave bandwidth
> to handle events and management commands?
>
> Can you please explain the workload and what kind of event and management
> handling is needed where you need to restrict nvmet_wq with a CPUMASK?
>
> The only reason I'm asking is that I've not seen this scenario so far in
> the many, many deployments since we added nvmet_wq, and I'd really like
> to learn about the scenario.
>
> -ck
Sorry for the unclear explanation.
The transport is tcp and the backend is block.
This is just a solution-level thing; in some complicated scenarios, we
deploy multiple missions on one machine (hybrid deployment), such as:
1. Dockers for function computation.
2. Real-time tasks.
3. Monitoring, and handling events and management commands.
4. And also the nvme target server.
All of them are restricted to their own CPU cores to prevent mutual
influence.
There is no problem if nvmet_wq runs on all CPUs, of course, but for
strict isolation we need this restriction.
I don't know if I've given enough detail.
Best Regards,
Guixin Liu
>
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 11:20 ` Guixin Liu
@ 2024-10-30 18:38 ` Chaitanya Kulkarni
2024-10-31 2:01 ` Guixin Liu
0 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 18:38 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 10/30/24 04:20, Guixin Liu wrote:
>
> Sorry for the unclear explanation.
>
> The transport is tcp and the backend is block.
>
> This is just a solution-level thing; in some complicated scenarios, we
> deploy multiple missions on one machine (hybrid deployment), such as:
>
> 1. Dockers for function computation.
> 2. Real-time tasks.
> 3. Monitoring, and handling events and management commands.
> 4. And also the nvme target server.
>
> All of them are restricted to their own CPU cores to prevent mutual
> influence.
>
> There is no problem if nvmet_wq runs on all CPUs, of course, but for
> strict isolation we need this restriction.
>
> I don't know if I've given enough detail.
>
> Best Regards,
>
> Guixin Liu
can you please send a patch with the detailed use case?
Also, it will be nice (not a blocker to merge this patch) if you can
provide steps similar to those listed above so we can get this scenario
tested; even better if you can submit a blktest, but if not I'll send
one once I get the steps.
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-30 18:38 ` Chaitanya Kulkarni
@ 2024-10-31 2:01 ` Guixin Liu
2024-10-31 2:45 ` Chaitanya Kulkarni
0 siblings, 1 reply; 15+ messages in thread
From: Guixin Liu @ 2024-10-31 2:01 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 2024/10/31 02:38, Chaitanya Kulkarni wrote:
> On 10/30/24 04:20, Guixin Liu wrote:
>> Sorry for the unclear explanation.
>>
>> The transport is tcp and the backend is block.
>>
>> This is just a solution-level thing; in some complicated scenarios, we
>> deploy multiple missions on one machine (hybrid deployment), such as:
>>
>> 1. Dockers for function computation.
>> 2. Real-time tasks.
>> 3. Monitoring, and handling events and management commands.
>> 4. And also the nvme target server.
>>
>> All of them are restricted to their own CPU cores to prevent mutual
>> influence.
>>
>> There is no problem if nvmet_wq runs on all CPUs, of course, but for
>> strict isolation we need this restriction.
>>
>> I don't know if I've given enough detail.
>>
>> Best Regards,
>>
>> Guixin Liu
> can you please send a patch with the detailed use case?
>
> Also, it will be nice (not a blocker to merge this patch) if you can
> provide steps similar to those listed above so we can get this scenario
> tested; even better if you can submit a blktest, but if not I'll send
> one once I get the steps.
>
> -ck
I will send v2 with our use case to explain why we need to restrict the
cpumask.
I'm concerned whether blktests can handle such complex tests, as the
scenario relies on deploying many Docker containers and services. Should
it only test the case of setting the cpumask with fio?
Best Regards,
Guixin Liu
>
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-31 2:01 ` Guixin Liu
@ 2024-10-31 2:45 ` Chaitanya Kulkarni
2024-10-31 6:39 ` Chaitanya Kulkarni
0 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-31 2:45 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
> On Oct 30, 2024, at 7:01 PM, Guixin Liu <kanie@linux.alibaba.com> wrote:
>
>
>> On 2024/10/31 02:38, Chaitanya Kulkarni wrote:
>>> On 10/30/24 04:20, Guixin Liu wrote:
>>> Sorry for the unclear explanation.
>>>
>>> The transport is tcp and the backend is block.
>>>
>>> This is just a solution-level thing; in some complicated scenarios, we
>>> deploy multiple missions on one machine (hybrid deployment), such as:
>>>
>>> 1. Dockers for function computation.
>>> 2. Real-time tasks.
>>> 3. Monitoring, and handling events and management commands.
>>> 4. And also the nvme target server.
>>>
>>> All of them are restricted to their own CPU cores to prevent mutual
>>> influence.
>>>
>>> There is no problem if nvmet_wq runs on all CPUs, of course, but for
>>> strict isolation we need this restriction.
>>>
>>> I don't know if I've given enough detail.
>>>
>>> Best Regards,
>>>
>>> Guixin Liu
>> can you please send a patch with the detailed use case?
>>
>> Also, it will be nice (not a blocker to merge this patch) if you can
>> provide steps similar to those listed above so we can get this scenario
>> tested; even better if you can submit a blktest, but if not I'll send
>> one once I get the steps.
>>
>> -ck
>
> I will send v2 with our use case to explain why we need to restrict the
> cpumask.
>
> I'm concerned whether blktests can handle such complex tests, as the
> scenario relies on deploying many Docker containers and services. Should
> it only test the case of setting the cpumask with fio?
>
> Best Regards,
>
> Guixin Liu
>
For now, just cpumask and fio is sufficient, so that when we upstream this patch we have some sort of testing done via sysfs.
-ck
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-31 2:45 ` Chaitanya Kulkarni
@ 2024-10-31 6:39 ` Chaitanya Kulkarni
2024-10-31 6:55 ` Guixin Liu
0 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-31 6:39 UTC (permalink / raw)
To: Guixin Liu; +Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 10/30/24 19:45, Chaitanya Kulkarni wrote:
>> I will send v2 with our use case to explain why we need to restrict the
>> cpumask.
>>
>> I'm concerned whether blktests can handle such complex tests, as the
>> scenario relies on deploying many Docker containers and services. Should
>> it only test the case of setting the cpumask with fio?
>>
>> Best Regards,
>>
>> Guixin Liu
>>
> For now, just cpumask and fio is sufficient, so that when we upstream this patch we have some sort of testing done via sysfs.
>
>
> -ck
Based on my very limited understanding, I've written a rough blktest
for your patch; see if this helps. It's a bit rough and totally
untested :-
#!/bin/bash
# SPDX-License-Identifier: GPL-3.0+
# Description: Test nvmet_wq cpumask sysfs attribute with NVMe-oF and
# an fio workload

. tests/nvme/rc

DESCRIPTION="Test nvmet_wq cpumask sysfs attribute and verify with fio on NVMe-oF device"

requires() {
    _nvme_requires
    _have_fio
    _require_nvme_trtype_is_fabrics
}

test() {
    # The queue is allocated as "nvmet-wq", so that is the directory
    # name WQ_SYSFS creates under /sys/devices/virtual/workqueue/.
    local cpumask_path="/sys/devices/virtual/workqueue/nvmet-wq/cpumask"

    # Check if the cpumask attribute exists
    if [[ ! -f "$cpumask_path" ]]; then
        SKIP_REASONS+=("nvmet_wq cpumask sysfs attribute not found.")
        return 1
    fi

    # Save the original cpumask value
    local original_cpumask
    original_cpumask=$(cat "$cpumask_path")
    echo "Original cpumask: $original_cpumask"

    # Set a new cpumask (e.g., CPU 0)
    echo 1 | tee "$cpumask_path" > /dev/null
    local new_cpumask
    new_cpumask=$(cat "$cpumask_path")

    if [[ "$new_cpumask" != "1" ]]; then
        echo "Test Failed: cpumask was not set correctly"
        return 1
    else
        echo "Test Passed: cpumask set to $new_cpumask"
    fi

    # Set up the NVMe-over-Fabrics target
    echo "Setting up NVMe-oF target"
    _setup_nvmet
    _nvmet_target_setup
    _nvme_connect_subsys

    # Locate the NVMe-oF namespace
    local ns
    ns=$(_find_nvme_ns "${def_subsys_uuid}")

    # Run fio with data verification on the NVMe-oF device
    echo "Starting fio workload with verification on NVMe-oF device"
    fio --name=nvmet_wq_test --filename="/dev/$ns" --direct=1 \
        --rw=randwrite --bs=4k --size=100M --numjobs=1 \
        --verify=crc32c --verify_fatal=1 --time_based --runtime=30s \
        --iodepth=16 --ioengine=libaio --group_reporting

    # Disconnect and clean up the NVMe-oF target
    echo "Cleaning up NVMe-oF setup"
    _nvme_disconnect_subsys
    _nvmet_target_cleanup

    # Restore the original cpumask
    echo "$original_cpumask" | tee "$cpumask_path" > /dev/null
    local restored_cpumask
    restored_cpumask=$(cat "$cpumask_path")

    if [[ "$restored_cpumask" != "$original_cpumask" ]]; then
        echo "Failed to restore original cpumask."
        return 1
    else
        echo "Original cpumask restored successfully."
    fi
}
* Re: [PATCH] nvmet: make nvmet_wq visible in sysfs
2024-10-31 6:39 ` Chaitanya Kulkarni
@ 2024-10-31 6:55 ` Guixin Liu
0 siblings, 0 replies; 15+ messages in thread
From: Guixin Liu @ 2024-10-31 6:55 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me
On 2024/10/31 14:39, Chaitanya Kulkarni wrote:
> On 10/30/24 19:45, Chaitanya Kulkarni wrote:
>>> I will send v2 with our use case to explain why we need to restrict the
>>> cpumask.
>>>
>>> I'm concerned whether blktests can handle such complex tests, as the
>>> scenario relies on deploying many Docker containers and services. Should
>>> it only test the case of setting the cpumask with fio?
>>>
>>> Best Regards,
>>>
>>> Guixin Liu
>>>
>> For now, just cpumask and fio is sufficient, so that when we upstream this patch we have some sort of testing done via sysfs.
>>
>> -ck
> Based on my very limited understanding, I've written a rough blktest
> for your patch; see if this helps. It's a bit rough and totally
> untested :-
> [quoted blktest script trimmed]
The script looks good, thank you for adding the new test to my patch.
Best Regards,
Guixin Liu
end of thread, newest: 2024-10-31 6:55 UTC
Thread overview: 15+ messages
2024-10-29 1:49 [PATCH] nvmet: make nvmet_wq visible in sysfs Guixin Liu
2024-10-29 5:04 ` Chaitanya Kulkarni
2024-10-29 6:46 ` Guixin Liu
2024-10-29 19:52 ` Chaitanya Kulkarni
2024-10-30 0:49 ` Chaitanya Kulkarni
2024-10-30 1:44 ` Guixin Liu
2024-10-30 5:53 ` hch
2024-10-30 6:44 ` Guixin Liu
2024-10-30 6:33 ` Chaitanya Kulkarni
2024-10-30 11:20 ` Guixin Liu
2024-10-30 18:38 ` Chaitanya Kulkarni
2024-10-31 2:01 ` Guixin Liu
2024-10-31 2:45 ` Chaitanya Kulkarni
2024-10-31 6:39 ` Chaitanya Kulkarni
2024-10-31 6:55 ` Guixin Liu