netdev.vger.kernel.org archive mirror
From: Yuval Mintz <Yuval.Mintz@cavium.com>
To: <davem@davemloft.net>, <netdev@vger.kernel.org>
Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
Subject: [PATCH net-next 0/7] qed: load/unload mfw series
Date: Tue, 28 Mar 2017 15:12:49 +0300	[thread overview]
Message-ID: <1490703176-429-1-git-send-email-Yuval.Mintz@cavium.com> (raw)

This series corrects the unload flow and greatly enhances the
initialization flow with regard to the interactions between the
driver and the management firmware.

Patch #1 makes sure unloading is done under the management firmware's
'critical section' protection.

Patches #2 - #4 move the driver to a newer load scheme with regard
to the MFW; this newer scheme helps clean the device in case a
previous instance has dirtied it [preboot, PDA, etc.].

Patches #5 - #6 let the driver inform the management firmware of the
number of resources it needs, which depends on the non-management
firmware used.
Patch #7 then uses a new resource [BDQ] instead of a hardcoded value.

Dave,

Please consider applying this series to 'net-next'.

Thanks,
Yuval

Tomer Tayar (4):
  qed: Correct HW stop flow
  qed: Move to new load request scheme
  qed: Support management-based resource locking
  qed: Utilize resource-lock based scheme

Yuval Mintz (3):
  qed: hw_init() to receive parameter-struct
  qed: Send pf-flr as part of initialization
  qed: Use BDQ resource for storage protocols

 drivers/net/ethernet/qlogic/qed/qed.h         |  33 +-
 drivers/net/ethernet/qlogic/qed/qed_dcbx.h    |   3 -
 drivers/net/ethernet/qlogic/qed/qed_dev.c     | 579 ++++++++++--------
 drivers/net/ethernet/qlogic/qed/qed_dev_api.h |  73 ++-
 drivers/net/ethernet/qlogic/qed/qed_fcoe.c    |  30 +-
 drivers/net/ethernet/qlogic/qed/qed_hsi.h     |  87 ++-
 drivers/net/ethernet/qlogic/qed/qed_iscsi.c   |  32 +-
 drivers/net/ethernet/qlogic/qed/qed_main.c    |  50 +-
 drivers/net/ethernet/qlogic/qed/qed_mcp.c     | 819 +++++++++++++++++++++++---
 drivers/net/ethernet/qlogic/qed/qed_mcp.h     | 179 +++++-
 10 files changed, 1480 insertions(+), 405 deletions(-)

-- 
1.9.3


Thread overview: 9+ messages
2017-03-28 12:12 Yuval Mintz [this message]
2017-03-28 12:12 ` [PATCH net-next 1/7] qed: Correct HW stop flow Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 2/7] qed: hw_init() to receive parameter-struct Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 3/7] qed: Move to new load request scheme Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 4/7] qed: Send pf-flr as part of initialization Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 5/7] qed: Support management-based resource locking Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 6/7] qed: Utilize resource-lock based scheme Yuval Mintz
2017-03-28 12:12 ` [PATCH net-next 7/7] qed: Use BDQ resource for storage protocols Yuval Mintz
2017-03-29  1:06 ` [PATCH net-next 0/7] qed: load/unload mfw series David Miller

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1490703176-429-1-git-send-email-Yuval.Mintz@cavium.com \
    --to=yuval.mintz@cavium.com \
    --cc=davem@davemloft.net \
    --cc=netdev@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.