public inbox for linux-scsi@vger.kernel.org
* [LSF/MM/BPF Topic][LSF/MM/BPF Attend] iscsi issue of scale with MNoT
@ 2022-02-17 18:19 lduncan
  2022-03-01 23:34 ` Khazhy Kumykov
  0 siblings, 1 reply; 2+ messages in thread
From: lduncan @ 2022-02-17 18:19 UTC (permalink / raw)
  To: lsf-pc; +Cc: linux-scsi, linux-block

[RESEND -- apologies if you see this more than once]

The iSCSI protocol continues to be used in Linux, but some of the
users push the system past its normal limits. And using multipath just
exacerbates that problem (usually doubling the number of sessions).

I'd like to gather some numbers for open-iscsi (the standard Linux
iSCSI initiator) and the kernel target code (i.e. LIO/targetcli) on
what happens when there are MNoT -- massive numbers of targets.

"Massive" in my case, will be relative, since I don't have access to
a supercomputer, but I believe it will not be too hard to start
pushing the system too far. For example, a recent user problem found
that even at 2000 sessions using multipath, the system takes about 80
seconds to switch paths. Each switch takes 80ms (and they are
currently serialized), but when you multiply that by 1000 it adds up.
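The arithmetic above can be sketched as a back-of-the-envelope model.
This is not open-iscsi or multipath-tools code, just an illustration of
why serialized path switching scales linearly with path count (the
constants come from the example in this message; the helper name is
hypothetical):

```python
PER_SWITCH_MS = 80    # observed cost of one path switch (from the report)
NUM_SWITCHES = 1000   # ~2000 sessions -> roughly 1000 multipath switches

def total_failover_seconds(per_switch_ms: int, num_switches: int) -> float:
    """Serialized switches: total time is simply the product."""
    return per_switch_ms * num_switches / 1000.0

print(total_failover_seconds(PER_SWITCH_MS, NUM_SWITCHES))  # -> 80.0 seconds
```

The takeaway is that at this scale even a cheap per-path operation
becomes user-visible once it is run sequentially across every path.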

For the initiator, I've long suspected some parts of the code were not
designed for scale, so this might give me a chance to find and
possibly address some of these issues.

--
Lee Duncan
