* [PATCH blktests] nvme: test log page offsets
@ 2024-02-05 18:52 Keith Busch
2024-02-05 22:30 ` Chaitanya Kulkarni
2024-02-06 6:02 ` Shinichiro Kawasaki
0 siblings, 2 replies; 8+ messages in thread
From: Keith Busch @ 2024-02-05 18:52 UTC (permalink / raw)
To: linux-nvme, shinichiro.kawasaki; +Cc: Keith Busch
From: Keith Busch <kbusch@kernel.org>
I've encountered a device that fails catastrophically if the host
requests an error log with a non-zero LPO. The fallout has been bad
enough to warrant a sanity check against this scenario.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
tests/nvme/051 | 26 ++++++++++++++++++++++++++
tests/nvme/051.out | 2 ++
2 files changed, 28 insertions(+)
create mode 100755 tests/nvme/051
create mode 100644 tests/nvme/051.out
diff --git a/tests/nvme/051 b/tests/nvme/051
new file mode 100755
index 0000000..ef30ad8
--- /dev/null
+++ b/tests/nvme/051
@@ -0,0 +1,26 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2024 Keith Busch
+
+. tests/nvme/rc
+
+DESCRIPTION="Tests device support for log page offsets"
+CAN_BE_ZONED=1
+QUICK=1
+
+requires() {
+ _nvme_requires
+}
+
+test_device() {
+ echo "Running ${TEST_NAME}"
+
+ lpa=$(sudo nvme id-ctrl "${TEST_DEV}" | grep lpa | cut -d":" -f 2 | xargs)
+ lpo=$((lpa & 0x4))
+
+ if [ $lpo -ne 0 ]; then
+ nvme get-log "${TEST_DEV}" --log-id=1 --log-len=128 --lpo=0x1000 > /dev/null
+ fi
+
+ echo "Test complete"
+}
diff --git a/tests/nvme/051.out b/tests/nvme/051.out
new file mode 100644
index 0000000..156f068
--- /dev/null
+++ b/tests/nvme/051.out
@@ -0,0 +1,2 @@
+Running nvme/051
+Test complete
--
2.34.1
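For reference, the lpa/lpo arithmetic in the test above can be sketched in isolation. This is a standalone illustration, not part of the patch; the `lpa` value below is a hypothetical sample rather than output read from a real device.

```shell
#!/bin/bash
# Sketch of the LPA check used by the test above. Bit 2 (mask 0x4) of the
# Log Page Attributes (LPA) field advertises extended Get Log Page support,
# which includes the Log Page Offset (LPO) field the test exercises.

lpa=0x7                      # hypothetical value as reported by `nvme id-ctrl`
lpo_supported=$((lpa & 0x4)) # non-zero when bit 2 is set

if (( lpo_supported )); then
	echo "LPO supported"
else
	echo "LPO not supported"
fi
```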
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-05 18:52 [PATCH blktests] nvme: test log page offsets Keith Busch
@ 2024-02-05 22:30 ` Chaitanya Kulkarni
2024-02-06 6:02 ` Shinichiro Kawasaki
1 sibling, 0 replies; 8+ messages in thread
From: Chaitanya Kulkarni @ 2024-02-05 22:30 UTC (permalink / raw)
To: Keith Busch
Cc: Keith Busch, shinichiro.kawasaki@wdc.com,
linux-nvme@lists.infradead.org
On 2/5/24 10:52, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> I've encountered a device that fails catastrophically if the host
> requests to an error log with a non-zero LPO. The fallout has been bad
> enough to warrant a sanity check against this scenario.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-05 18:52 [PATCH blktests] nvme: test log page offsets Keith Busch
2024-02-05 22:30 ` Chaitanya Kulkarni
@ 2024-02-06 6:02 ` Shinichiro Kawasaki
2024-02-06 6:41 ` Keith Busch
1 sibling, 1 reply; 8+ messages in thread
From: Shinichiro Kawasaki @ 2024-02-06 6:02 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-nvme@lists.infradead.org, Keith Busch
Hi Keith, thanks for the patch.
On Feb 05, 2024 / 10:52, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> I've encountered a device that fails catastrophically if the host
> requests to an error log with a non-zero LPO. The fallout has been bad
> enough to warrant a sanity check against this scenario.
Question, which part of the kernel code does this test case cover? I'm wondering
if this test case might be testing NVMe devices rather than the kernel code.
Also, was there any related kernel code change or discussion? If so, I would
like to leave links to them in the commit message.
I ran this test case on my test system using QEMU NVME device, and saw it failed
with the message below.
nvme/051 => nvme0n1 (Tests device support for log page offsets) [failed]
runtime 0.104s ... 0.126s
--- tests/nvme/051.out 2024-02-06 09:46:03.522522896 +0900
+++ /home/shin/Blktests/blktests/results/nvme0n1/nvme/051.out.bad 2024-02-06 14:50:57.394105192 +0900
@@ -1,2 +1,3 @@
Running nvme/051
+NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002)
Test complete
I took a look in the latest QEMU code, and found it returns "Invalid Field in
Command" when the specified offset is larger than error log size in QEMU. Do I
miss anything to make this test case pass?
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-06 6:02 ` Shinichiro Kawasaki
@ 2024-02-06 6:41 ` Keith Busch
2024-02-06 16:50 ` Keith Busch
2024-02-07 4:32 ` Shinichiro Kawasaki
0 siblings, 2 replies; 8+ messages in thread
From: Keith Busch @ 2024-02-06 6:41 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: Keith Busch, linux-nvme@lists.infradead.org
On Tue, Feb 06, 2024 at 06:02:24AM +0000, Shinichiro Kawasaki wrote:
> Hi Keith, thanks for the patch.
>
> On Feb 05, 2024 / 10:52, Keith Busch wrote:
> > From: Keith Busch <kbusch@kernel.org>
> >
> > I've encountered a device that fails catastrophically if the host
> > requests to an error log with a non-zero LPO. The fallout has been bad
> > enough to warrant a sanity check against this scenario.
>
> Question, which part of the kernel code does this test case cover? I'm wondering
> if this test case might be testing NVMe devices rather than the kernel code.
This is definitely a device-side focused test. This isn't really
exercising particularly interesting parts of the kernel/driver that are
not already thoroughly hit with other tests.
> Also, was there any related kernel code change or discussion? If so, I would
> like to leave links to them in the commit message.
Not a kernel change, but a tooling change. smartmontools, particularly
the minor update package from Red Hat, version 7.1-3, introduced a change
to split large logs into multiple commands. The device doesn't just
break itself in this scenario: after it sees an error log command with a
non-zero offset, every non-posted PCIe transaction times out, AER
handling fails to recover, and servers go offline with it. A truly
spectacular cascade of failure from a seemingly benign test case.
FWIW, while the device handling is completely wrong, this log has read
side effects, so an application reading non-zero offsets is pretty much
nonsense (and unfortunately nvme-cli does similar things too). I've
filed a bug report on smartmontools' GitHub. I believe someone else will
be taking this up with Red Hat because the backports from 7.2 to the
minor 7.1-3 update are wrong anyway.
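The splitting behavior described above can be illustrated with a rough sketch. This is assumed behavior for illustration only, not smartmontools source; the sizes are made up.

```shell
#!/bin/bash
# Illustration of reading a large log page in fixed-size chunks, advancing
# the Log Page Offset (LPO) each iteration -- the access pattern that
# triggered the device failure described above.

log_len=4096   # hypothetical total log size in bytes
chunk=1024     # hypothetical per-command transfer size

offset=0
while (( offset < log_len )); do
	# Each iteration would correspond to one command, e.g.:
	#   nvme get-log <dev> --log-id=1 --log-len=$chunk --lpo=$offset
	echo "get-log chunk at lpo=${offset}"
	(( offset += chunk ))
done
```

A device that only ever expects `lpo=0` for the error log sees three commands here it was never tested against.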
> I ran this test case on my test system using QEMU NVME device, and saw it failed
> with the message below.
>
> nvme/051 => nvme0n1 (Tests device support for log page offsets) [failed]
> runtime 0.104s ... 0.126s
> --- tests/nvme/051.out 2024-02-06 09:46:03.522522896 +0900
> +++ /home/shin/Blktests/blktests/results/nvme0n1/nvme/051.out.bad 2024-02-06 14:50:57.394105192 +0900
> @@ -1,2 +1,3 @@
> Running nvme/051
> +NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002)
> Test complete
>
> I took a look in the latest QEMU code, and found it returns "Invalid Field in
> Command" when the specified offset is larger than error log size in QEMU. Do I
> miss anything to make this test case pass?
Ah, good catch. I don't want to fail the test if the device correctly
reports an error, so I'd need to redirect 2>&1. Success for this test is
just if the command completes at all with any status.
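A hypothetical revision of the test body along those lines might look like the following. This is a sketch of the described fix, not the actual follow-up patch: stderr is redirected so that a compliant "Invalid Field in Command" completion does not fail the golden-output comparison.

```shell
#!/bin/bash
# Sketch: accept any completion status from the device; the test only
# guards against the command hanging or breaking the device outright.

test_device() {
	echo "Running ${TEST_NAME}"

	lpa=$(nvme id-ctrl "${TEST_DEV}" | grep lpa | cut -d: -f2 | xargs)
	if (( (lpa & 0x4) != 0 )); then
		# Success is the command completing with any status at all,
		# so discard both stdout and stderr.
		nvme get-log "${TEST_DEV}" --log-id=1 --log-len=128 \
			--lpo=0x1000 > /dev/null 2>&1
	fi

	echo "Test complete"
}
```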
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-06 6:41 ` Keith Busch
@ 2024-02-06 16:50 ` Keith Busch
2024-02-07 4:32 ` Shinichiro Kawasaki
1 sibling, 0 replies; 8+ messages in thread
From: Keith Busch @ 2024-02-06 16:50 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: Keith Busch, linux-nvme@lists.infradead.org
On Mon, Feb 05, 2024 at 10:41:03PM -0800, Keith Busch wrote:
>
> FWIW, while the device handling is completely wrong, this log has read
> side effects, so an application reading non-zero offsets is pretty much
> nonsense (and unfortunately nvme-cli does similiar things too).
Ah, please ignore this statement. Error log entries are just transient
before dropping off. I confused myself with a different log that has that
behavior. Yesterday was a long day...
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-06 6:41 ` Keith Busch
2024-02-06 16:50 ` Keith Busch
@ 2024-02-07 4:32 ` Shinichiro Kawasaki
2024-02-07 5:19 ` Chaitanya Kulkarni
2024-02-07 5:52 ` Keith Busch
1 sibling, 2 replies; 8+ messages in thread
From: Shinichiro Kawasaki @ 2024-02-07 4:32 UTC (permalink / raw)
To: Keith Busch; +Cc: Keith Busch, linux-nvme@lists.infradead.org
On Feb 05, 2024 / 22:41, Keith Busch wrote:
> On Tue, Feb 06, 2024 at 06:02:24AM +0000, Shinichiro Kawasaki wrote:
> > Hi Keith, thanks for the patch.
> >
> > On Feb 05, 2024 / 10:52, Keith Busch wrote:
> > > From: Keith Busch <kbusch@kernel.org>
> > >
> > > I've encountered a device that fails catastrophically if the host
> > > requests to an error log with a non-zero LPO. The fallout has been bad
> > > enough to warrant a sanity check against this scenario.
> >
> > Question, which part of the kernel code does this test case cover? I'm wondering
> > if this test case might be testing NVMe devices rather than the kernel code.
>
> This is definitely a device-side focused test. This isn't really
> exercising particularly interesting parts of the kernel/driver that are
> not already thoroughly hit with other tests.
>
> > Also, was there any related kernel code change or discussion? If so, I would
> > like to leave links to them in the commit message.
>
> Not a kernel change, but a tooling change. smartmontools, particularly
> the minor update package from Redhat, version 7.1-3, introduced a change
> to split large logs into multiple commands. The device doesn't just
> break itself in this scenario: every non-posted PCIe transaction times
> out after it sees an error log command with a non-zero offset that AER
> handing fails to recover, taking servers offline with it. A truly
> spectacular cascade of failure from a seemingly benign test case.
Thanks for the explanations. The issue sounds nasty. IIUC, the motivation to
add this test case is to catch the device issue early and not lose
developers' precious time debugging it again.
Having said that, this test case will be the first test case to test *devices*,
and it will extend the role of blktests. So far, all blktests test cases are
intended to test kernel/driver *code*.
With this background, I have two questions in my mind:
Q1) What is the expected action to take when blktests users observe failure of
the test case? Report to linux-block or linux-nvme? Or report to the
device manufacturer?
Q2) When a new nvme driver patch needs check before submit, is this test case
worth running?
I guess the answer to Q1 is 'Report to the device manufacturer' and the answer
to Q2 is 'No'. It would be better to clarify these points with some more
description in the comment block of the test case.
Another idea is to separate the test case from the nvme group into a new group
called "nvmedev" or "devcompat", dedicated to checking that storage
devices work well with the Linux storage stack. That would make it easier for
blktests users to understand the action to take, and which test group to run.
What do you think?
...
>
> > I ran this test case on my test system using QEMU NVME device, and saw it failed
> > with the message below.
> >
> > nvme/051 => nvme0n1 (Tests device support for log page offsets) [failed]
> > runtime 0.104s ... 0.126s
> > --- tests/nvme/051.out 2024-02-06 09:46:03.522522896 +0900
> > +++ /home/shin/Blktests/blktests/results/nvme0n1/nvme/051.out.bad 2024-02-06 14:50:57.394105192 +0900
> > @@ -1,2 +1,3 @@
> > Running nvme/051
> > +NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002)
> > Test complete
> >
> > I took a look in the latest QEMU code, and found it returns "Invalid Field in
> > Command" when the specified offset is larger than error log size in QEMU. Do I
> > miss anything to make this test case pass?
>
> Ah, good catch. I don't want to fail the test if the device correctly
> reports an error, so I'd need to redirect 2>&1. Success for this test is
> just if the command completes at all with any status.
Got it, thanks for the clarification.
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-07 4:32 ` Shinichiro Kawasaki
@ 2024-02-07 5:19 ` Chaitanya Kulkarni
2024-02-07 5:52 ` Keith Busch
1 sibling, 0 replies; 8+ messages in thread
From: Chaitanya Kulkarni @ 2024-02-07 5:19 UTC (permalink / raw)
To: Shinichiro Kawasaki, Keith Busch
Cc: Keith Busch, linux-nvme@lists.infradead.org
> Another idea is to separate the test case from the nvme group to a new group
> called "nvmedev" or "devcompat", which is dedicated to check that the storage
> devices work good with Linux storage stack. It will make it easier for the
> blktests users to understand the action to take, and which test group to run.
> What do you think?
>
>
Apart from the test being useful, the above suggestion will streamline the
handling of any test case that falls into the device-specific category,
while maintaining the flow of the existing infrastructure.
-ck
* Re: [PATCH blktests] nvme: test log page offsets
2024-02-07 4:32 ` Shinichiro Kawasaki
2024-02-07 5:19 ` Chaitanya Kulkarni
@ 2024-02-07 5:52 ` Keith Busch
1 sibling, 0 replies; 8+ messages in thread
From: Keith Busch @ 2024-02-07 5:52 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: Keith Busch, linux-nvme@lists.infradead.org
On Wed, Feb 07, 2024 at 04:32:02AM +0000, Shinichiro Kawasaki wrote:
> Thanks for the explanations. The issue sounds nasty. IIUC, the motivation to
> add this test case is to catch the device issue early and to not lose
> developer's precious time to debug it again.
>
> Having said that, this test case will be the first test case to test *devices*,
> and it will extend the role of blktests. So far, all blktests test cases are
> intended to test kernel/driver *code*.
>
> With this background, I have two questions in my mind:
>
> Q1) What is the expected action to take when blktests users observe failure of
> the test case? Report to linux-block or linux-nvme? Or report to the
> device manufacturer?
>
> Q2) When a new nvme driver patch needs check before submit, is this test case
> worth running?
>
> I guess the answer of Q1 is 'Report to the device manufacturer' and the answer
> of Q2 is 'No'. It would be the better to clarify these points with some more
> descriptions in the comment block of the test case.
>
> Another idea is to separate the test case from the nvme group to a new group
> called "nvmedev" or "devcompat", which is dedicated to check that the storage
> devices work good with Linux storage stack. It will make it easier for the
> blktests users to understand the action to take, and which test group to run.
> What do you think?
Splitting out device specific sanity tests sounds fine to me. There are
various suites for interop and spec compliance, but I don't know of any
good open source repos providing that.
And instead of end-users running the tests, I imagine vendors could
include a test run ahead of a product release or drive qualification.
Though to be honest, I think this patch is a bit silly to start off that
kind of ambitious project. I'll check with other potential parties to
see if there's interest in collaborating on something more complete.
I'm not sure how well existing device tests could fit in the blktests
framework, but maybe it could work.
end of thread, other threads:[~2024-02-07 5:52 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-05 18:52 [PATCH blktests] nvme: test log page offsets Keith Busch
2024-02-05 22:30 ` Chaitanya Kulkarni
2024-02-06 6:02 ` Shinichiro Kawasaki
2024-02-06 6:41 ` Keith Busch
2024-02-06 16:50 ` Keith Busch
2024-02-07 4:32 ` Shinichiro Kawasaki
2024-02-07 5:19 ` Chaitanya Kulkarni
2024-02-07 5:52 ` Keith Busch