From: Kashyap Desai <kashyap.desai@avagotech.com>
To: Lee Duncan <lduncan@suse.com>, Hannes Reinecke <hare@suse.de>,
Sumit Saxena <sumit.saxena@avagotech.com>
Cc: "PDL,MEGARAIDLINUX" <megaraidlinux.pdl@avagotech.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
James Bottomley <james.bottomley@hansenpartnership.com>,
linux-scsi@vger.kernel.org
Subject: RE: [PATCH] megaraid_sas: Fallback to older scanning if no disks are found
Date: Mon, 18 Jan 2016 10:44:31 +0530 [thread overview]
Message-ID: <7df34cf7b9d82b4bfcb79c3630fbf373@mail.gmail.com> (raw)
In-Reply-To: <56996BC1.40400@suse.com>
> -----Original Message-----
> From: Lee Duncan [mailto:lduncan@suse.com]
> Sent: Saturday, January 16, 2016 3:29 AM
> To: Hannes Reinecke; Sumit Saxena
> Cc: Kashyap Desai; megaraidlinux.pdl@avagotech.com; Martin K. Petersen;
> James Bottomley; linux-scsi@vger.kernel.org
> Subject: Re: [PATCH] megaraid_sas: Fallback to older scanning if no
> disks are found
>
> On 01/15/2016 06:13 AM, Hannes Reinecke wrote:
> > commit 21c9e160a51383d4cb0b882398534b0c95c0cc3b implemented a new
> > driver lookup using the MR_DCMD_LD_LIST_QUERY firmware command.
> > However, this command might not work properly on older firmware,
> > causing the command to return no drives instead of an error.
> > This causes a regression on older firmware as the driver will no
> > longer detect any drives.
> > This patch checks if MR_DCMD_LD_LIST_QUERY returns no drives, and
> > falls back to the original method if so.
> >
> > Signed-off-by: Hannes Reinecke <hare@suse.de>
> > ---
> > drivers/scsi/megaraid/megaraid_sas_base.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c
> > b/drivers/scsi/megaraid/megaraid_sas_base.c
> > index f97ec34..79dff70 100644
> > --- a/drivers/scsi/megaraid/megaraid_sas_base.c
> > +++ b/drivers/scsi/megaraid/megaraid_sas_base.c
> > @@ -4107,6 +4107,10 @@ megasas_ld_list_query(struct megasas_instance *instance, u8 query_type)
> > ret = megasas_issue_polled(instance, cmd);
> >
> > tgtid_count = le32_to_cpu(ci->count);
> > + if (tgtid_count == 0) {
> > + /* No drives found, try the older LD list DCMD */
> > + ret = 1;
> > + }
> >
> > if ((ret == 0) && (tgtid_count <= (instance->fw_supported_vd_count))) {
> > memset(instance->ld_ids, 0xff, MEGASAS_MAX_LD_IDS);
> >
>
> Reviewed-by: Lee Duncan <lduncan@suse.com>
NACK, as a fix is already provided in another patch.
Please review this patch instead - http://marc.info/?l=linux-scsi&m=145044529215209&w=2
It contains changes that handle this particular issue as well, along with
many other areas of MFI DCMD timeout handling.
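
For context on why forcing ret = 1 produces a fallback: the driver's scan
paths already retry with the older LD-list DCMD whenever
megasas_ld_list_query() returns non-zero. A minimal sketch of that
caller-side pattern is below; it mirrors the megaraid_sas helpers but is
illustrative rather than the exact upstream code.

	/*
	 * Illustrative sketch (not the exact upstream code): a non-zero
	 * return from megasas_ld_list_query() makes the scan path fall
	 * back to the older MR_DCMD_LD_GET_LIST command via
	 * megasas_get_ld_list().  With the patch above, a query that
	 * reports zero drives also takes this fallback path.
	 */
	if (megasas_ld_list_query(instance,
				  MR_LD_QUERY_TYPE_EXPOSED_TO_HOST))
		megasas_get_ld_list(instance);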