Linux-NVME Archive on lore.kernel.org
* [PATCH]nvme-pci: Fixes EEH failure on ppc
@ 2018-02-05 21:49 wenxiong
  2018-02-06  9:54 ` Sagi Grimberg
  2018-02-06 16:33 ` Keith Busch
  0 siblings, 2 replies; 11+ messages in thread
From: wenxiong @ 2018-02-05 21:49 UTC (permalink / raw)


From: Wen Xiong <wenxiong@linux.vnet.ibm.com>

With commit b2a0eb1a0ac72869c910a79d935a0b049ec78ad9 ("nvme-pci: Remove watchdog
timer"), EEH recovery stops working on ppc.

After removing the watchdog timer routine, when EEH is triggered on ppc we hit
EEH in nvme_timeout(). We should check whether the PCI channel is offline
at the beginning of nvme_timeout(); if it is already offline, there is no
need to continue with the rest of the nvme timeout handling.

With this patch, EEH recovery works successfully on ppc.

Signed-off-by: Wen Xiong <wenxiong at linux.vnet.ibm.com>

[  232.585495] EEH: PHB#3 failure detected, location: N/A
[  232.585545] CPU: 8 PID: 4873 Comm: kworker/8:1H Not tainted
4.14.0-6.el7a.ppc64le #1
[  232.585646] Workqueue: kblockd blk_mq_timeout_work
[  232.585705] Call Trace:
[  232.585743] [c000003f7a533940] [c000000000c3556c]
dump_stack+0xb0/0xf4 (unreliable)
[  232.585823] [c000003f7a533980] [c000000000043eb0]
eeh_check_failure+0x290/0x630
[  232.585924] [c000003f7a533a30] [c008000011063f30]
nvme_timeout+0x1f0/0x410 [nvme]
[  232.586038] [c000003f7a533b00] [c000000000637fc8]
blk_mq_check_expired+0x118/0x1a0
[  232.586134] [c000003f7a533b80] [c00000000063e65c]
bt_for_each+0x11c/0x200
[  232.586191] [c000003f7a533be0] [c00000000063f1f8]
blk_mq_queue_tag_busy_iter+0x78/0x110
[  232.586272] [c000003f7a533c30] [c0000000006367b8]
blk_mq_timeout_work+0xa8/0x1c0
[  232.586351] [c000003f7a533c80] [c00000000015d5ec]
process_one_work+0x1bc/0x5f0
[  232.586431] [c000003f7a533d20] [c00000000016060c]
worker_thread+0xac/0x6b0
[  232.586485] [c000003f7a533dc0] [c00000000016a528] kthread+0x168/0x1b0
[  232.586539] [c000003f7a533e30] [c00000000000b4e8]
ret_from_kernel_thread+0x5c/0x74
[  232.586640] nvme nvme0: I/O 10 QID 0 timeout, reset controller
[  232.586640] EEH: Detected error on PHB#3
[  232.586642] EEH: This PCI device has failed 1 times in the last hour
[  232.586642] EEH: Notify device drivers to shutdown
[  232.586645] nvme nvme0: frozen state error detected, reset controller
[  234.098667] EEH: Collect temporary log
[  234.098694] PHB4 PHB#3 Diag-data (Version: 1)
[  234.098728] brdgCtl:    00000002
[  234.098748] RootSts:    00070020 00402000 c1010008 00100107 00000000
[  234.098807] RootErrSts: 00000000 00000020 00000001
[  234.098878] nFir:       0000800000000000 0030001c00000000
0000800000000000
[  234.098937] PhbSts:     0000001800000000 0000001800000000
[  234.098990] Lem:        0000000100000100 0000000000000000
0000000100000000
[  234.099067] PhbErr:     000004a000000000 0000008000000000
2148000098000240 a008400000000000
[  234.099140] RxeMrgErr:  0000000000000001 0000000000000001
0000000000000000 0000000000000000
[  234.099250] PcieDlp:    0000000000000000 0000000000000000
8000000000000000
[  234.099326] RegbErr:    00d0000010000000 0000000010000000
8800005800000000 0000000007011000
[  234.099418] EEH: Reset without hotplug activity
[  237.317675] nvme 0003:01:00.0: Refused to change power state,
currently in D3
[  237.317740] nvme 0003:01:00.0: Using 64-bit DMA iommu bypass
[  237.317797] nvme nvme0: Removing after probe failure status: -19
[  361.139047689,3] PHB#0003[0:3]: Escalating freeze to fence
PESTA[0]=a440002a01000000
[  237.617706] EEH: Notify device drivers the completion of reset
[  237.617754] nvme nvme0: restart after slot reset
[  237.617834] EEH: Notify device driver to resume
[  238.777746] nvme0n1: detected capacity change from 24576000000 to 0
[  238.777841] nvme0n2: detected capacity change from 24576000000 to 0
[  238.777944] nvme0n3: detected capacity change from 24576000000 to 0
[  238.778019] nvme0n4: detected capacity change from 24576000000 to 0
[  238.778132] nvme0n5: detected capacity change from 24576000000 to 0
[  238.778222] nvme0n6: detected capacity change from 24576000000 to 0
[  238.778314] nvme0n7: detected capacity change from 24576000000 to 0
[  238.778416] nvme0n8: detected capacity change from 24576000000 to 0
---
 drivers/nvme/host/pci.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6fe7af0..4809f3d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1153,12 +1153,6 @@ static bool nvme_should_reset(struct nvme_dev *dev, u32 csts)
 	if (!(csts & NVME_CSTS_CFS) && !nssro)
 		return false;
 
-	/* If PCI error recovery process is happening, we cannot reset or
-	 * the recovery mechanism will surely fail.
-	 */
-	if (pci_channel_offline(to_pci_dev(dev->dev)))
-		return false;
-
 	return true;
 }
 
@@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 	struct nvme_command cmd;
 	u32 csts = readl(dev->bar + NVME_REG_CSTS);
 
+	/* If PCI error recovery process is happening, we cannot reset or
+	 * the recovery mechanism will surely fail.
+	 */
+	if (pci_channel_offline(to_pci_dev(dev->dev)))
+		return BLK_EH_HANDLED;
+
 	/*
 	 * Reset immediately if the controller is failed
 	 */
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-05 21:49 [PATCH]nvme-pci: Fixes EEH failure on ppc wenxiong
@ 2018-02-06  9:54 ` Sagi Grimberg
  2018-02-06 16:33 ` Keith Busch
  1 sibling, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2018-02-06  9:54 UTC (permalink / raw)




On 02/05/2018 11:49 PM, wenxiong@vmlinux.vnet.ibm.com wrote:
> From: Wen Xiong <wenxiong at linux.vnet.ibm.com>
> 
> With commit b2a0eb1a0ac72869c910a79d935a0b049ec78ad9 ("nvme-pci: Remove watchdog
> timer"), EEH recovery stops working on ppc.
> 
> After removing the watchdog timer routine, when EEH is triggered on ppc we hit
> EEH in nvme_timeout(). We should check whether the PCI channel is offline
> at the beginning of nvme_timeout(); if it is already offline, there is no
> need to continue with the rest of the nvme timeout handling.

This makes sense to me.

Reviewed-by: Sagi Grimberg <sagi at grimberg.me>


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-05 21:49 [PATCH]nvme-pci: Fixes EEH failure on ppc wenxiong
  2018-02-06  9:54 ` Sagi Grimberg
@ 2018-02-06 16:33 ` Keith Busch
  2018-02-06 16:55   ` wenxiong
  2018-02-06 20:01   ` wenxiong
  1 sibling, 2 replies; 11+ messages in thread
From: Keith Busch @ 2018-02-06 16:33 UTC (permalink / raw)


On Mon, Feb 05, 2018@03:49:40PM -0600, wenxiong@vmlinux.vnet.ibm.com wrote:
> @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
>  	struct nvme_command cmd;
>  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>  
> +	/* If PCI error recovery process is happening, we cannot reset or
> +	 * the recovery mechanism will surely fail.
> +	 */
> +	if (pci_channel_offline(to_pci_dev(dev->dev)))
> +		return BLK_EH_HANDLED;
> +

This patch will tell the block layer to complete the request and consider
it a success, but it doesn't look like the command actually completed at
all. You're going to get data corruption this way, right? Is returning
BLK_EH_HANDLED immediately really the right thing to do here?


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 16:33 ` Keith Busch
@ 2018-02-06 16:55   ` wenxiong
  2018-02-06 17:02     ` Keith Busch
  2018-02-06 20:01   ` wenxiong
  1 sibling, 1 reply; 11+ messages in thread
From: wenxiong @ 2018-02-06 16:55 UTC (permalink / raw)


On 2018-02-06 10:33, Keith Busch wrote:
> On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com 
> wrote:
>> @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return 
>> nvme_timeout(struct request *req, bool reserved)
>>  	struct nvme_command cmd;
>>  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>> 
>> +	/* If PCI error recovery process is happening, we cannot reset or
>> +	 * the recovery mechanism will surely fail.
>> +	 */
>> +	if (pci_channel_offline(to_pci_dev(dev->dev)))
>> +		return BLK_EH_HANDLED;
>> +
> 
> This patch will tell the block layer to complete the request and 
> consider
> it a success, but it doesn't look like the command actually completed 
> at
> all. You're going to get data corruption this way, right? Is returning
> BLK_EH_HANDLED immediately really the right thing to do here?
> 
Hi Keith,

Do you think we can return BLK_EH_NOT_HANDLED?
enum blk_eh_timer_return {
         BLK_EH_NOT_HANDLED,
         BLK_EH_HANDLED,
         BLK_EH_RESET_TIMER,
};

Probably need to change the following return value as well.
         /*
          * Reset immediately if the controller is failed
          */
         if (nvme_should_reset(dev, csts)) {
                 nvme_warn_reset(dev, csts);
                 nvme_dev_disable(dev, false);
                 nvme_reset_ctrl(&dev->ctrl);
                 return BLK_EH_HANDLED;
         }

Let me know. I can re-build the kernel and try it.

Thanks,
Wendy


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 16:55   ` wenxiong
@ 2018-02-06 17:02     ` Keith Busch
  2018-02-06 17:08       ` wenxiong
  0 siblings, 1 reply; 11+ messages in thread
From: Keith Busch @ 2018-02-06 17:02 UTC (permalink / raw)


On Tue, Feb 06, 2018@10:55:41AM -0600, wenxiong wrote:
> On 2018-02-06 10:33, Keith Busch wrote:
> > On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com
> > wrote:
> > > @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return
> > > nvme_timeout(struct request *req, bool reserved)
> > >  	struct nvme_command cmd;
> > >  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
> > > 
> > > +	/* If PCI error recovery process is happening, we cannot reset or
> > > +	 * the recovery mechanism will surely fail.
> > > +	 */
> > > +	if (pci_channel_offline(to_pci_dev(dev->dev)))
> > > +		return BLK_EH_HANDLED;
> > > +
> > 
> > This patch will tell the block layer to complete the request and
> > consider
> > it a success, but it doesn't look like the command actually completed at
> > all. You're going to get data corruption this way, right? Is returning
> > BLK_EH_HANDLED immediately really the right thing to do here?
> > 
> Hi Keith,
> 
> Do you think we can return with BLK_EH_NOT_HANDLED?

Maybe. I'm not familiar with how the EEH handling is going to go. Do
you expect some other recovery to get the driver to either see a natural
completion at some point or recover it some other way?

> Probably need to change the following return value as well.
>         /*
>          * Reset immediately if the controller is failed
>          */
>         if (nvme_should_reset(dev, csts)) {
>                 nvme_warn_reset(dev, csts);
>                 nvme_dev_disable(dev, false);
>                 nvme_reset_ctrl(&dev->ctrl);
>                 return BLK_EH_HANDLED;
>         }

This is fine as-is. nvme_dev_disable reclaims all outstanding IO, so
there's no way the timed-out command has not been handled, making this
the appropriate return code here.


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 17:02     ` Keith Busch
@ 2018-02-06 17:08       ` wenxiong
  2018-02-06 17:15         ` Keith Busch
  0 siblings, 1 reply; 11+ messages in thread
From: wenxiong @ 2018-02-06 17:08 UTC (permalink / raw)


On 2018-02-06 11:02, Keith Busch wrote:
> On Tue, Feb 06, 2018@10:55:41AM -0600, wenxiong wrote:
>> On 2018-02-06 10:33, Keith Busch wrote:
>> > On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com
>> > wrote:
>> > > @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return
>> > > nvme_timeout(struct request *req, bool reserved)
>> > >  	struct nvme_command cmd;
>> > >  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>> > >
>> > > +	/* If PCI error recovery process is happening, we cannot reset or
>> > > +	 * the recovery mechanism will surely fail.
>> > > +	 */
>> > > +	if (pci_channel_offline(to_pci_dev(dev->dev)))
>> > > +		return BLK_EH_HANDLED;
>> > > +
>> >
>> > This patch will tell the block layer to complete the request and
>> > consider
>> > it a success, but it doesn't look like the command actually completed at
>> > all. You're going to get data corruption this way, right? Is returning
>> > BLK_EH_HANDLED immediately really the right thing to do here?
>> >
>> Hi Keith,
>> 
>> Do you think we can return with BLK_EH_NOT_HANDLED?
> 
> Maybe. I'm not familiar with how the EEH handling is going to go. Do
> you expect some other recovery to get the driver to either see a 
> natural
> completion at some point or recover it some other way?
> 

The powerpc kernel EEH code and the nvme driver's EEH callback functions are
going to recover it at this point.


Thanks,
Wendy


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 17:08       ` wenxiong
@ 2018-02-06 17:15         ` Keith Busch
  2018-02-06 18:00           ` wenxiong
  0 siblings, 1 reply; 11+ messages in thread
From: Keith Busch @ 2018-02-06 17:15 UTC (permalink / raw)


On Tue, Feb 06, 2018@11:08:14AM -0600, wenxiong wrote:
> 
> Powerpc kernel code/nvme driver eeh callback functions are  going to recover
> it at this point.

Are these using the registered pci_err_handler callbacks? If so, it
may be okay to return NOT_HANDLED from the timeout handler.


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 17:15         ` Keith Busch
@ 2018-02-06 18:00           ` wenxiong
  0 siblings, 0 replies; 11+ messages in thread
From: wenxiong @ 2018-02-06 18:00 UTC (permalink / raw)


On 2018-02-06 11:15, Keith Busch wrote:
> 
> Are these using the registered pci_err_handler callbacks? If so, it
> may be okay to return NOT_HANDLED from the timeout handler.
> 
Yes. I changed it to return BLK_EH_NOT_HANDLED. EEH recovery completed, but
the nvme list command hung. I did this:

#nvme subsystem-reset /dev/nvme0   ---> trigger EEH on ppc

#nvme list                         ---> generate some traffic during the EEH


nvme            D    0  9916   8538 0x00040082
[ 1316.727442] Call Trace:
[ 1316.727488] [c000000f6deaf740] [c00000000001ccc0] 
__switch_to+0x330/0x660
[ 1316.727620] [c000000f6deaf7a0] [c000000000c5b724] 
__schedule+0x354/0xaf0
[ 1316.727692] [c000000f6deaf870] [c000000000c5bf08] schedule+0x48/0xc0
[ 1316.727813] [c000000f6deaf8a0] [c000000000c62664] 
schedule_timeout+0x374/0x580
[ 1316.727891] [c000000f6deaf990] [c000000000c5b398] 
io_schedule_timeout+0x68/0xa0
[ 1316.727988] [c000000f6deaf9c0] [c000000000c5d968] 
wait_for_common_io.constprop.6+0x178/0x280
[ 1316.728130] [c000000f6deafa40] [c000000000634d0c] 
blk_execute_rq+0x9c/0xf0
[ 1316.728196] [c000000f6deafab0] [c008000015ca2e48] 
nvme_submit_user_cmd+0xf8/0x3a0 [nvme_core]
[ 1316.728346] [c000000f6deafb30] [c008000015ca78d0] 
nvme_user_cmd+0x250/0x3f0 [nvme_core]
[ 1316.728440] [c000000f6deafc70] [c0000000006496d8] 
blkdev_ioctl+0x7d8/0x1120
[ 1316.728522] [c000000f6deafce0] [c0000000004b9494] 
block_ioctl+0x64/0xd0
[ 1316.728635] [c000000f6deafd20] [c000000000467500] 
do_vfs_ioctl+0xe0/0xa80
[ 1316.728749] [c000000f6deafde0] [c000000000467f74] 
SyS_ioctl+0xd4/0x130
[ 1316.728831] [c000000f6deafe30] [c00000000000b184] 
system_call+0x58/0x6c


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 16:33 ` Keith Busch
  2018-02-06 16:55   ` wenxiong
@ 2018-02-06 20:01   ` wenxiong
  2018-02-07  1:24     ` Ming Lei
  1 sibling, 1 reply; 11+ messages in thread
From: wenxiong @ 2018-02-06 20:01 UTC (permalink / raw)


On 2018-02-06 10:33, Keith Busch wrote:
> On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com 
> wrote:
>> @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return 
>> nvme_timeout(struct request *req, bool reserved)
>>  	struct nvme_command cmd;
>>  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>> 
>> +	/* If PCI error recovery process is happening, we cannot reset or
>> +	 * the recovery mechanism will surely fail.
>> +	 */
>> +	if (pci_channel_offline(to_pci_dev(dev->dev)))
>> +		return BLK_EH_HANDLED;
>> +
> 
> This patch will tell the block layer to complete the request and 
> consider
> it a success, but it doesn't look like the command actually completed 
> at
> all. You're going to get data corruption this way, right? Is returning
> BLK_EH_HANDLED immediately really the right thing to do here?

Hi Ming,

Can you help check whether it is ok to return BLK_EH_HANDLED in this
case?

Thanks,
Wendy


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-06 20:01   ` wenxiong
@ 2018-02-07  1:24     ` Ming Lei
  2018-02-07 20:19       ` wenxiong
  0 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2018-02-07  1:24 UTC (permalink / raw)


On Tue, Feb 06, 2018@02:01:05PM -0600, wenxiong wrote:
> On 2018-02-06 10:33, Keith Busch wrote:
> > On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com
> > wrote:
> > > @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return
> > > nvme_timeout(struct request *req, bool reserved)
> > >  	struct nvme_command cmd;
> > >  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
> > > 
> > > +	/* If PCI error recovery process is happening, we cannot reset or
> > > +	 * the recovery mechanism will surely fail.
> > > +	 */
> > > +	if (pci_channel_offline(to_pci_dev(dev->dev)))
> > > +		return BLK_EH_HANDLED;
> > > +
> > 
> > This patch will tell the block layer to complete the request and
> > consider
> > it a success, but it doesn't look like the command actually completed at
> > all. You're going to get data corruption this way, right? Is returning
> > BLK_EH_HANDLED immediately really the right thing to do here?
> 
> Hi Ming,
> 
> Can you help check whether it is ok to return BLK_EH_HANDLED in this
> case?

Hi Wenxiong,

It looks like Keith is correct: the timed-out request will be completed by the
block layer and the NVMe driver if BLK_EH_HANDLED is returned, but this IO
hasn't actually completed, so either data loss (on a write) or a read failure
results.

Maybe BLK_EH_RESET_TIMER is fine in this situation.

Thanks,
Ming


* [PATCH]nvme-pci: Fixes EEH failure on ppc
  2018-02-07  1:24     ` Ming Lei
@ 2018-02-07 20:19       ` wenxiong
  0 siblings, 0 replies; 11+ messages in thread
From: wenxiong @ 2018-02-07 20:19 UTC (permalink / raw)


On 2018-02-06 19:24, Ming Lei wrote:
> On Tue, Feb 06, 2018@02:01:05PM -0600, wenxiong wrote:
>> On 2018-02-06 10:33, Keith Busch wrote:
>> > On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxiong at vmlinux.vnet.ibm.com
>> > wrote:
>> > > @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return
>> > > nvme_timeout(struct request *req, bool reserved)
>> > >  	struct nvme_command cmd;
>> > >  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>> > >
>> > > +	/* If PCI error recovery process is happening, we cannot reset or
>> > > +	 * the recovery mechanism will surely fail.
>> > > +	 */
>> > > +	if (pci_channel_offline(to_pci_dev(dev->dev)))
>> > > +		return BLK_EH_HANDLED;
>> > > +
>> >
>> > This patch will tell the block layer to complete the request and
>> > consider
>> > it a success, but it doesn't look like the command actually completed at
>> > all. You're going to get data corruption this way, right? Is returning
>> > BLK_EH_HANDLED immediately really the right thing to do here?
>> 
>> Hi Ming,
>> 
>> Can you help check whether it is ok to return BLK_EH_HANDLED in
>> this
>> case?
> 
> Hi Wenxiong,
> 
> It looks like Keith is correct: the timed-out request will be completed by the
> block layer and the NVMe driver if BLK_EH_HANDLED is returned, but this IO
> hasn't actually completed, so either data loss (on a write) or a read failure
> results.
> 
> Maybe BLK_EH_RESET_TIMER is fine in this situation.
> 
> Thanks,
> Ming
> 
Hi Ming,

Thanks! I tried BLK_EH_RESET_TIMER and EEH recovery works
fine. I am going to resubmit the patch.

Thanks,
Wendy


end of thread, other threads:[~2018-02-07 20:19 UTC | newest]

Thread overview: 11+ messages
2018-02-05 21:49 [PATCH]nvme-pci: Fixes EEH failure on ppc wenxiong
2018-02-06  9:54 ` Sagi Grimberg
2018-02-06 16:33 ` Keith Busch
2018-02-06 16:55   ` wenxiong
2018-02-06 17:02     ` Keith Busch
2018-02-06 17:08       ` wenxiong
2018-02-06 17:15         ` Keith Busch
2018-02-06 18:00           ` wenxiong
2018-02-06 20:01   ` wenxiong
2018-02-07  1:24     ` Ming Lei
2018-02-07 20:19       ` wenxiong
