public inbox for linux-nvme@lists.infradead.org
From: Seth Forshee <sforshee@kernel.org>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: nvme-tcp request timeouts
Date: Wed, 12 Oct 2022 23:57:24 -0500	[thread overview]
Message-ID: <Y0eatMZ6Yfkbiox6@do-x1extreme> (raw)
In-Reply-To: <12074b00-7c53-5867-c785-58b96247d682@grimberg.me>

On Wed, Oct 12, 2022 at 08:30:18PM +0300, Sagi Grimberg wrote:
> > o- / ......................................................................................................................... [...]
> >    o- hosts ................................................................................................................... [...]
> >    | o- hostnqn ............................................................................................................... [...]
> >    o- ports ................................................................................................................... [...]
> >    | o- 2 ................................................... [trtype=tcp, traddr=..., trsvcid=4420, inline_data_size=16384]
> >    |   o- ana_groups .......................................................................................................... [...]
> >    |   | o- 1 ..................................................................................................... [state=optimized]
> >    |   o- referrals ........................................................................................................... [...]
> >    |   o- subsystems .......................................................................................................... [...]
> >    |     o- testnqn ........................................................................................................... [...]
> >    o- subsystems .............................................................................................................. [...]
> >      o- testnqn ............................................................. [version=1.3, allow_any=1, serial=2c2e39e2a551f7febf33]
> >        o- allowed_hosts ....................................................................................................... [...]
> >        o- namespaces .......................................................................................................... [...]
> >          o- 1  [path=/dev/loop0, uuid=8a1561fb-82c3-4e9d-96b9-11c7b590d047, nguid=ef90689c-6c46-d44c-89c1-4067801309a8, grpid=1, enabled]
> 
> Ohh, I'd say that would be the culprit...
> the loop driver uses only a single queue to access the disk. This means that
> all your 100+ nvme-tcp queues are all serializing access on the single loop
> disk queue. This heavy back-pressure bubbles all the way
> back to the host and manifests in IO timeouts when large bursts hit...
> 
> I can say that loop is not the best way to benchmark performance, and
> I'd expect to see such phenomena when attempting to drive high loads
> to a loop device...

The goal wasn't to benchmark performance with this setup, just to start
getting familiar with nvme-tcp.
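
For what it's worth, the serialization is easy to spot from sysfs: blk-mq
exposes one directory per hardware queue under each block device. A rough
sketch (loop0 is the backing device from the config above; the NVMe device
name is just an example):

```shell
# blk-mq creates one subdirectory per hardware queue under mq/.
# The loop driver typically exposes a single hardware queue...
ls /sys/block/loop0/mq/            # usually just "0"

# ...while an NVMe drive usually gets one queue per CPU.
ls /sys/block/nvme0n1/mq/ | wc -l
```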

> Maybe you can possibly use a tmpfs file directly instead (nvmet supports
> file backends as well).
> 
> Or maybe you can try to use null_blk with memory_backed=Y modparam (may need
> to define cache_size modparam as well, never tried it with memory
> backing...)? That would be more efficient.

I've got this set up now with an nvme drive as the backend for the
target, and as you predicted the timeouts went away. So it does seem the
problem was the loop device. Thanks for the help!
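
In case it helps anyone who finds this thread later, switching the backend
boils down to repointing the namespace in configfs (sketch only; the
subsystem and namespace names match the tree shown earlier, and
/dev/nvme0n1 stands in for whatever spare drive you have):

```shell
NS=/sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
echo 0 > $NS/enable                      # namespace must be disabled first
echo -n /dev/nvme0n1 > $NS/device_path   # point at the real NVMe drive
echo 1 > $NS/enable
```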

Seth


Thread overview: 10+ messages
2022-10-11 15:31 nvme-tcp request timeouts Seth Forshee
2022-10-11 19:30 ` Chaitanya Kulkarni
2022-10-11 20:14   ` Seth Forshee
2022-10-11 20:19     ` Chaitanya Kulkarni
2022-10-11 20:37       ` Seth Forshee
2022-10-12  6:33         ` Sagi Grimberg
2022-10-12 16:55           ` Seth Forshee
2022-10-12 17:30             ` Sagi Grimberg
2022-10-13  4:57               ` Seth Forshee [this message]
2022-10-12  7:51         ` Sagi Grimberg
