public inbox for linux-nvme@lists.infradead.org
From: Christoph Hellwig <hch@infradead.org>
To: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
	Chaitanya Kulkarni <kch@nvidia.com>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [GIT PULL] nvme fixes for Linux 6.3
Date: Thu, 16 Mar 2023 08:58:41 +0100
Message-ID: <ZBLMMaLFyRsgOvGp@infradead.org>

The following changes since commit b6402014cab0481bdfd1ffff3e1dad714e8e1205:

  block: null_blk: cleanup null_queue_rq() (2023-03-15 06:50:24 -0600)

are available in the Git repository at:

  git://git.infradead.org/nvme.git tags/nvme-6.3-2022-03-16

for you to fetch changes up to 6173a77b7e9d3e202bdb9897b23f2a8afe7bf286:

  nvmet: avoid potential UAF in nvmet_req_complete() (2023-03-15 14:58:53 +0100)

----------------------------------------------------------------
nvme fixes for Linux 6.3

 - avoid potential UAF in nvmet_req_complete (Damien Le Moal)
 - more quirks (Elmer Miroslav Mosher Golovin, Philipp Geulen)
 - fix a memory leak in the nvme-pci probe teardown path (Irvin Cote)
 - repair the MAINTAINERS entry (Lukas Bulwahn)
 - fix handling single range discard request (Ming Lei)
 - show more opcode names in trace events (Minwoo Im)
 - fix nvme-tcp timeout reporting (Sagi Grimberg)

----------------------------------------------------------------
Damien Le Moal (1):
      nvmet: avoid potential UAF in nvmet_req_complete()

Elmer Miroslav Mosher Golovin (1):
      nvme-pci: add NVME_QUIRK_BOGUS_NID for Netac NV3000

Irvin Cote (1):
      nvme-pci: fixing memory leak in probe teardown path

Lukas Bulwahn (1):
      MAINTAINERS: repair malformed T: entries in NVM EXPRESS DRIVERS

Ming Lei (1):
      nvme: fix handling single range discard request

Minwoo Im (1):
      nvme-trace: show more opcode names

Philipp Geulen (1):
      nvme-pci: add NVME_QUIRK_BOGUS_NID for Lexar NM620

Sagi Grimberg (2):
      nvme-tcp: fix opcode reporting in the timeout handler
      nvme-tcp: add nvme-tcp pdu size build protection

 MAINTAINERS                |  8 ++++----
 drivers/nvme/host/core.c   | 28 +++++++++++++++++++---------
 drivers/nvme/host/pci.c    |  5 +++++
 drivers/nvme/host/tcp.c    | 33 +++++++++++++++++++++++++++------
 drivers/nvme/target/core.c |  4 +++-
 include/linux/nvme.h       |  5 +++++
 6 files changed, 63 insertions(+), 20 deletions(-)



Thread overview: 8+ messages
2023-03-16  7:58 Christoph Hellwig [this message]
2023-03-16 13:02 ` [GIT PULL] nvme fixes for Linux 6.3 Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2023-03-30 22:37 Christoph Hellwig
2023-03-30 22:39 ` Jens Axboe
2023-03-23  4:07 Christoph Hellwig
2023-03-23 19:02 ` Jens Axboe
2023-02-15 18:17 Christoph Hellwig
2023-02-16  3:25 ` Jens Axboe
